Frontier‑level coding and agentic performance. Claude Sonnet 4.5 leads on real‑world coding tasks and shows substantial gains in computer use, reasoning, and math. It is designed for long‑horizon, multi‑step work across IDE and terminal workflows, large codebases, and complex agents, and serves as a drop‑in upgrade over previous Sonnet versions.
Added Sep 29, 2025
Context Window: 1.0M tokens
Max Output: 64.0K tokens
Input Price (Auto): $2.99/1M tokens
Output Price (Auto): $14.99/1M tokens
Cache Read (Auto): $0.30/1M tokens
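The listed Auto rates make per-request cost a simple weighted sum of token counts. As a minimal sketch, assuming the rates above apply uniformly (the function name and structure are illustrative, not an official API):

```python
# Rough cost estimator using the Auto-routing rates listed above.
# Rates are USD per 1M tokens; this is an illustrative sketch, not an official SDK.

INPUT_RATE = 2.99        # $/1M input tokens (Auto)
OUTPUT_RATE = 14.99      # $/1M output tokens (Auto)
CACHE_READ_RATE = 0.30   # $/1M cache-read tokens (Auto)

def estimate_cost(input_tokens: int, output_tokens: int, cache_read_tokens: int = 0) -> float:
    """Return the estimated request cost in USD at the listed Auto rates."""
    return (
        input_tokens * INPUT_RATE
        + output_tokens * OUTPUT_RATE
        + cache_read_tokens * CACHE_READ_RATE
    ) / 1_000_000

# Example: 200K input tokens, 8K output tokens, no cache reads
print(round(estimate_cost(200_000, 8_000), 5))  # → 0.71792
```

Note how output tokens dominate cost at these rates: each output token is roughly five times the price of an input token, and cache reads are about a tenth of the input rate.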
Capabilities
Performance metrics and benchmarks
Sourced from Artificial Analysis.
Intelligence Index: 37.1
Coding Index: 33.5
Auto routing is available for this model; explicit provider selection is not available.
GPQA Diamond (graduate-level scientific reasoning): 72.7%, better than 67% of models compared
HLE (Humanity's Last Exam): 7.1%, better than 61% of models compared
IFBench (instruction-following benchmark): 42.7%, better than 53% of models compared
T²-Bench Telecom (conversational AI agents in dual-control scenarios): 70.5%, better than 70% of models compared
AA-LCR (long-context reasoning evaluation): 51.3%, better than 68% of models compared
SciCode (Python programming for scientific computing): 42.8%, better than 90% of models compared
Terminal-Bench Hard (agentic coding and terminal use): 28.8%, better than 78% of models compared
AIME 2025 (American Invitational Mathematics Examination 2025): 37.0%, better than 38% of models compared
MMLU-Pro (professional and academic subject knowledge): 86.0%, better than 94% of models compared
LiveCodeBench (contamination-free coding benchmark): 59.0%, better than 65% of models compared
Last updated May 15, 2026