DeepSeek-R1 is now live and open source, rivaling OpenAI's o1.
Context Window: 64.0K tokens
Max Output: 65.5K tokens
Input Price (Auto): $0.40 / 1M tokens
Output Price (Auto): $1.70 / 1M tokens
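To make the per-million rates concrete, here is a minimal cost estimate in Python. Only the two prices come from the listing above; the token counts in the example are illustrative placeholders.

```python
# Estimate the cost of one request at the listed auto-routed rates.
# Prices are from this listing; the token counts below are illustrative only.
INPUT_PRICE_PER_M = 0.40   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 1.70  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for a single request."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 2,000-token prompt with an 8,000-token reasoning-heavy reply.
print(f"${request_cost(2_000, 8_000):.4f}")  # -> $0.0144
```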
Performance metrics and benchmarks, sourced from Artificial Analysis. Last updated May 15, 2026.
Intelligence Index: 27.1
Coding Index: 24.0
Agentic Index: 3.8
Auto routing is available for this model; explicit provider selection is not available.
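Because a provider cannot be pinned, a request simply names the model and lets the router pick a backend. The sketch below is an assumption-laden illustration using the OpenAI Python SDK against an OpenAI-compatible router endpoint; the base URL, the `deepseek/deepseek-r1` slug, and the `ROUTER_API_KEY` variable are not part of this listing.

```python
# Minimal auto-routed request sketch (OpenAI Python SDK).
# Assumptions, not from this listing: an OpenAI-compatible router endpoint,
# the model slug "deepseek/deepseek-r1", and an API key in ROUTER_API_KEY.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://example-router.invalid/api/v1",  # hypothetical router endpoint
    api_key=os.environ["ROUTER_API_KEY"],
)

# No provider is specified: the router selects one automatically (auto routing).
response = client.chat.completions.create(
    model="deepseek/deepseek-r1",
    messages=[{"role": "user", "content": "Summarize GPQA Diamond in one sentence."}],
)
print(response.choices[0].message.content)
```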
GPQA Diamond (graduate-level scientific reasoning): 81.3%, better than 85% of models compared
HLE (Humanity's Last Exam): 14.9%, better than 83% of models compared
IFBench (instruction-following benchmark): 39.6%, better than 45% of models compared
τ²-Bench Telecom (conversational AI agents in dual-control scenarios): 36.5%, better than 52% of models compared
AA-LCR (long-context reasoning): 54.7%, better than 72% of models compared
GDPval-AA (economically valuable tasks): 0.0%, better than 18% of models compared
CritPt (research-level physics reasoning): 0.6%, better than 80% of models compared
SciCode (Python programming for scientific computing): 40.3%, better than 84% of models compared
Terminal-Bench Hard (agentic coding and terminal use): 15.9%, better than 60% of models compared
AIME 2025 (American Invitational Mathematics Examination 2025): 76.0%, better than 71% of models compared
AIME (American Invitational Mathematics Examination): 89.3%, better than 95% of models compared
MMLU-Pro (professional and academic subject knowledge): 84.9%, better than 90% of models compared
AA-Omniscience Accuracy (proportion of questions answered correctly): 30.7%, better than 95% of models compared
LiveCodeBench (contamination-free coding benchmark): 77.0%, better than 88% of models compared
Math-500 (diverse mathematical problem solving): 98.3%, better than 94% of models compared
AA-Omniscience Hallucination Rate (rate of incorrect answers among non-correct responses; lower is better): 89.5%, better than 19% of models compared
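To make the two AA-Omniscience figures concrete: accuracy is the share of all questions answered correctly, and the hallucination rate is the share of the remaining, non-correct responses that were wrong answers rather than abstentions. The worked reading below follows that definition; the 1,000-question total is illustrative, not from the listing.

```python
# Worked interpretation of the AA-Omniscience figures above.
# Assumed definition: hallucination rate = incorrect / (incorrect + abstained),
# i.e. the rate of incorrect answers among non-correct responses.
TOTAL_QUESTIONS = 1_000        # illustrative sample size
accuracy = 0.307               # 30.7% of questions answered correctly
hallucination_rate = 0.895     # 89.5% of non-correct responses were wrong answers

correct = accuracy * TOTAL_QUESTIONS          # ~307 questions
non_correct = TOTAL_QUESTIONS - correct       # ~693 questions
incorrect = hallucination_rate * non_correct  # ~620 wrong answers
abstained = non_correct - incorrect           # ~73 abstentions

print(f"correct={correct:.0f}, incorrect={incorrect:.0f}, abstained={abstained:.0f}")
```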