claude-sonnet-4-6

Model Description

Claude Sonnet 4.6 represents a significant leap in the Claude model family, offering performance that previously required an Opus-class model at a more accessible price point. It introduces a massive 1-million-token context window in beta, allowing users to process entire codebases, lengthy contracts, or dozens of research papers in a single request. For individuals on Free and Pro plans, Sonnet 4.6 is now the default experience on claude.ai and Claude Cowork, maintaining the established pricing of $3 per million input tokens and $15 per million output tokens.
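The per-token pricing above can be turned into a quick cost estimate. A minimal sketch (the function name and the example token counts are illustrative, not from the source):

```python
# Hypothetical cost estimator using the rates quoted above:
# $3 per million input tokens, $15 per million output tokens.
def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    INPUT_RATE = 3.00 / 1_000_000    # USD per input token
    OUTPUT_RATE = 15.00 / 1_000_000  # USD per output token
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 200k-token codebase prompt with a 4k-token reply:
print(round(estimate_cost(200_000, 4_000), 4))  # 0.66
```

Note that output tokens cost five times as much as input tokens, so long generations dominate the bill even for large prompts.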

The model shows substantial gains in technical tasks, with early testers preferring it over Sonnet 4.5 in approximately 70% of coding cases. Beyond simple bug fixes, Sonnet 4.6 is noted for its ability to read context thoroughly before modifying code and its significant reduction in “laziness” and overengineering compared to previous iterations. Remarkably, developers often preferred it to the November 2025 version of Claude Opus 4.5, citing higher consistency in instruction following, fewer false claims of success, and more reliable follow-through on complex, multi-step tasks.

A standout feature of this release is the refined “computer use” capability. The model interacts with standard software—such as Chrome, VS Code, and LibreOffice—the way a person does, by virtually clicking and typing rather than relying on custom-built APIs. Performance on the OSWorld benchmark has seen steady growth, with the model now capable of managing complex spreadsheets and multi-step web forms across several browser tabs. Furthermore, the model has undergone extensive safety evaluations, demonstrating a “warm, honest, and prosocial” character with improved resistance to prompt injection attacks compared to its predecessors.

In long-horizon planning and business simulations, Sonnet 4.6 has demonstrated advanced strategic thinking. In the Vending-Bench Arena, which simulates running a business over time, the model showcased a sophisticated strategy by investing heavily in capacity during the early stages before pivoting to focus on profitability in the final stretch. This intelligence extends to design and financial work as well; early users report that visual outputs and frontend code are significantly more polished, featuring better layouts and animations that require fewer iterations to reach production-ready quality.

🔔 How to Use

graph LR
    A("Purchase Now") --> B["Start Chat on Homepage"]
    A --> D["Read API Documentation"]
    B --> C["Register / Login"]
    C --> E["Enter Key"]
    D --> F["Enter Endpoint & Key"]
    E --> G("Start Using")
    F --> G
    style A fill:#f9f9f9,stroke:#333,stroke-width:1px
    style B fill:#f9f9f9,stroke:#333,stroke-width:1px
    style C fill:#f9f9f9,stroke:#333,stroke-width:1px
    style D fill:#f9f9f9,stroke:#333,stroke-width:1px
    style E fill:#f9f9f9,stroke:#333,stroke-width:1px
    style F fill:#f9f9f9,stroke:#333,stroke-width:1px
    style G fill:#f9f9f9,stroke:#333,stroke-width:1px
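The "Enter Endpoint & Key" step in the API path above can be sketched as follows. This assumes the gateway exposes an OpenAI-compatible chat-completions endpoint, which is common for such services but not stated in the source; the base URL and key are placeholders to be replaced with the values from your account's API documentation:

```python
import json
import urllib.request

# Placeholder values -- substitute the endpoint and key from your account.
API_BASE = "https://example.com/v1"  # assumed OpenAI-compatible endpoint
API_KEY = "sk-..."                   # your API key

payload = {
    "model": "claude-sonnet-4-6",
    "messages": [
        {"role": "user", "content": "Summarize this contract clause."}
    ],
}

request = urllib.request.Request(
    API_BASE + "/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(request) would send the request; it is left
# unexecuted here since the endpoint and key are placeholders.
```

The same request shape works from any HTTP client; only the base URL, key, and model ID change.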


Description Ends

Recommend Models

DeepClaude-3-7-sonnet

DeepSeek-R1 + claude-3-7-sonnet-20250219. The Deep series combines the chain-of-thought reasoning of the DeepSeek-R1 (671B) model with a stronger companion model that produces the final response, pairing DeepSeek's powerful reasoning with the companion model's strengths to enhance overall capability.

gpt-4.1-2025-04-14

GPT-4.1 is our flagship model for complex tasks. It is well suited for problem solving across domains.

gpt-4.1-nano-2025-04-14

GPT-4.1 nano is the fastest, most cost-effective GPT-4.1 model.