Amazon's new flagship model. It accepts up to 300K input tokens and offers performance comparable to ChatGPT and Claude 3.5 Sonnet.
Added Dec 3, 2024
Context Window: 300.0K
Max Output: 32.0K
Input Price (Auto): $0.80/1M
Output Price (Auto): $3.20/1M
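As a quick sanity check on the prices above, per-request cost is simply (input tokens / 1M) × $0.80 plus (output tokens / 1M) × $3.20. A minimal sketch (the function name and token counts are illustrative, not part of any official SDK):

```python
def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate USD cost of one request at the auto-routing prices above."""
    INPUT_PRICE_PER_M = 0.80   # $ per 1M input tokens
    OUTPUT_PRICE_PER_M = 3.20  # $ per 1M output tokens
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# A full 300K-token context plus a maximal 32K-token completion:
print(round(request_cost(300_000, 32_000), 4))  # 0.3424
```

So even a request that maxes out both the context window and the output cap stays well under a dollar.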
Performance metrics and benchmarks
Sourced from Artificial Analysis.
Intelligence Index: 13.5
Coding Index: 11.0
Agentic Index: 4.7
Auto routing is available for this model. Explicit provider selection is not available.
GPQA Diamond (graduate-level scientific reasoning): 49.9%, better than 34% of models compared
HLE (Humanity's Last Exam): 3.4%, better than 4% of models compared
IFBench (instruction-following benchmark): 38.1%, better than 40% of models compared
τ²-Bench Telecom (conversational AI agents in dual-control scenarios): 14.0%, better than 14% of models compared
AA-LCR (long-context reasoning evaluation): 19.0%, better than 36% of models compared
GDPval-AA (economically valuable tasks): 0.0%, better than 18% of models compared
CritPt (research-level physics reasoning): 0.0%, better than 36% of models compared
SciCode (Python programming for scientific computing): 20.8%, better than 26% of models compared
Terminal-Bench Hard (agentic coding and terminal use): 6.1%, better than 38% of models compared
AIME 2025 (American Invitational Mathematics Examination 2025): 7.0%, better than 11% of models compared
AIME (American Invitational Mathematics Examination): 10.7%, better than 34% of models compared
MMLU-Pro (professional and academic subject knowledge): 69.1%, better than 33% of models compared
AA-Omniscience Accuracy (proportion of correctly answered questions): 17.0%, better than 43% of models compared
Last updated May 15, 2026
LiveCodeBench (contamination-free coding benchmark): 23.3%, better than 24% of models compared
Math-500 (diverse mathematical problem-solving benchmark): 78.6%, better than 44% of models compared
AA-Omniscience Hallucination Rate (rate of incorrect answers among non-correct responses): 77.9%, better than 65% of models compared
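The hallucination-rate metric above counts incorrect answers as a share of all non-correct responses, i.e. it distinguishes confidently wrong answers from abstentions. A sketch under that assumption (the counts below are illustrative, not Artificial Analysis data):

```python
def hallucination_rate(incorrect: int, abstained: int) -> float:
    """Share of non-correct responses that are wrong answers rather than
    abstentions: incorrect / (incorrect + abstained)."""
    non_correct = incorrect + abstained
    if non_correct == 0:
        return 0.0  # no non-correct responses, nothing to hallucinate on
    return incorrect / non_correct

# E.g. 60 wrong answers and 17 abstentions in the non-correct pool:
print(round(hallucination_rate(60, 17) * 100, 1))  # 77.9
```

Under this definition a lower value is better: it means the model abstains rather than answering wrongly when it does not know.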