A fast, cost-effective reasoning model with a 1M-token context window. Supports extended thinking with adjustable depth levels, plus built-in web grounding and code interpreter tools.
Context Window
1.0M
Max Output
65.5K
Input Price (Auto)
$0.51/1M
Output Price (Auto)
$4.25/1M
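At the auto-routed rates above ($0.51 per 1M input tokens, $4.25 per 1M output tokens), per-request cost is straightforward to estimate. A minimal sketch (the function name and defaults are illustrative, not part of any provider SDK):

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price_per_m: float = 0.51,
                  output_price_per_m: float = 4.25) -> float:
    """Estimate USD cost of one request at the auto-routed rates above."""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# e.g. a 100K-token prompt with a 10K-token response:
print(f"${estimate_cost(100_000, 10_000):.4f}")  # → $0.0935
```

Note that extended thinking tokens are typically billed as output, so deeper reasoning settings raise the output-side term.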
Performance metrics and benchmarks
Sourced from Artificial Analysis.
Intelligence Index
18.0
Auto routing is available for this model. Explicit provider selection is not available.
Coding Index
12.5
Agentic Index
21.1
GPQA Diamond
Graduate-level scientific reasoning
60.3%
Better than 48% of models compared
HLE
Humanity's Last Exam
3.0%
Better than 1% of models compared
IFBench
Instruction-following benchmark
40.5%
Better than 48% of models compared
τ²-Bench Telecom
Conversational AI agents in dual-control scenarios
62.0%
Better than 65% of models compared
AA-LCR
Long context reasoning evaluation
17.7%
Better than 34% of models compared
GDPval-AA
Economically valuable tasks
0.0%
Better than 18% of models compared
CritPt
Research-level physics reasoning
0.0%
Better than 36% of models compared
SciCode
Python programming for scientific computing
24.0%
Better than 34% of models compared
Terminal-Bench Hard
Agentic coding and terminal use
6.8%
Better than 43% of models compared
AIME 2025
American Invitational Mathematics Examination 2025
33.7%
Better than 34% of models compared
MMLU-Pro
Professional and academic subject knowledge
74.3%
Better than 46% of models compared
AA-Omniscience Accuracy
Proportion of correctly answered questions
13.5%
Better than 25% of models compared
Last updated May 15, 2026
LiveCodeBench
Contamination-free coding benchmark
34.6%
Better than 42% of models compared
AA-Omniscience Hallucination Rate
Share of questions not answered correctly where the model gave a wrong answer rather than abstaining
83.9%
Better than 44% of models compared