
"Despite the fact that Gemini 3 Pro and GPT-5.1-Codex-Max only came close to the coding performance of Sonnet 4.5, Anthropic deemed it necessary to make Opus 4.5 a significantly better software engineer. In a somewhat misleading bar chart (the bars start at 70 percent and the yardstick stops at 82 percent), Opus 4.5 is clearly a step ahead of the competition. Using the widely embraced SWE Bench Verified test, Opus 4.5 scored 80.9 percent, significantly better than Sonnet 4.5 (77.2 percent), Codex-Max (77.9 percent), and Gemini 3 Pro (76.2 percent)."
"The new model is available immediately in all Claude apps, via the API, and on all three major cloud platforms: Azure, GCP, and AWS. At the same time, Anthropic is lowering the prices for the Claude API. The new AI model costs $5 per million input tokens and $25 per million output tokens. This makes Opus models a more realistic option than before, as Anthropic was often on the expensive side in terms of pricing."
According to Anthropic, Opus 4.5 also uses fewer tokens than its predecessors, including Opus 4.1, to reach equal or better results: the model backtracks less, explores less redundantly, and spends fewer tokens on verbal reasoning. That efficiency matters in a race where the competition, Google's Gemini 3 Pro and OpenAI's GPT-5.1, is close behind.