DeepSeek's 685B-parameter mathematical reasoning model with self-verification capabilities. It achieves gold-level scores on IMO 2025 and CMO 2024, plus 118/120 on Putnam 2024, and is built on DeepSeek-V3.2-Exp-Base with a generator-verifier architecture for rigorous theorem proving.
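The generator-verifier loop behind such self-verification can be sketched as below. This is an illustrative assumption of how generate-verify-refine architectures typically work, not DeepSeek's actual implementation; all function names, the scoring scheme, and the acceptance threshold are hypothetical.

```python
# Hypothetical sketch of a generator-verifier loop for theorem proving.
# The models are stubbed out; in practice both would be LLM calls.

def generate_proof(problem: str, feedback: str = "") -> str:
    # Stand-in for the generator model producing a candidate proof,
    # optionally revising based on verifier feedback.
    return f"proof of {problem}" + (f" (revised: {feedback})" if feedback else "")

def verify_proof(problem: str, proof: str) -> tuple[float, str]:
    # Stand-in for the verifier model scoring rigor and flagging gaps.
    score = 1.0 if "revised" in proof else 0.5
    feedback = "" if score >= 0.9 else "justify the key step"
    return score, feedback

def prove(problem: str, max_rounds: int = 4, threshold: float = 0.9) -> str:
    """Iterate generation and verification until the verifier accepts."""
    feedback = ""
    proof = ""
    for _ in range(max_rounds):
        proof = generate_proof(problem, feedback)
        score, feedback = verify_proof(problem, proof)
        if score >= threshold:
            return proof  # verifier accepts: self-verified proof
    return proof  # best effort after max_rounds
```

The design point is that the verifier provides a training and inference-time signal for rigor that a raw answer-checking reward cannot.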
Added Dec 3, 2025
Context Window: 128.0K tokens
Max Output: 65.5K tokens
Input Price (Auto): $0.60/1M tokens
Output Price (Auto): $2.20/1M tokens
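At these rates, the cost of a single request can be estimated as follows; the token counts in the example are made-up values for illustration.

```python
# Per-million-token prices from the listing above (auto routing).
INPUT_PRICE_PER_M = 0.60   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 2.20  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed auto-routing rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 100K-token prompt with a 20K-token response
cost = request_cost(100_000, 20_000)  # 0.06 + 0.044 = $0.104
```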
Performance metrics and benchmarks
Sourced from Artificial Analysis.
Intelligence Index: 29.4
Auto routing is available for this model. Explicit provider selection is not available.
Coding Index: 37.9
GPQA Diamond (graduate-level scientific reasoning): 87.1%, better than 94% of models compared
HLE (Humanity's Last Exam): 26.1%, better than 93% of models compared
IFBench (instruction-following benchmark): 63.9%, better than 81% of models compared
τ²-Bench Telecom (conversational AI agents in dual-control scenarios): 0.0%, better than 3% of models compared
AA-LCR (long-context reasoning evaluation): 59.3%, better than 81% of models compared
SciCode (Python programming for scientific computing): 44.0%, better than 92% of models compared
Terminal-Bench Hard (agentic coding and terminal use): 34.8%, better than 87% of models compared
AIME 2025 (American Invitational Mathematics Examination 2025): 96.7%, better than 98% of models compared
MMLU-Pro (professional and academic subject knowledge): 86.3%, better than 95% of models compared
Last updated May 15, 2026
LiveCodeBench (contamination-free coding benchmark): 89.6%, better than 99% of models compared