DeepSeek V3.2 Exp is DeepSeek's current flagship model, offering markedly better performance than its predecessors, especially on longer contexts. Served in FP8.
Context Window: 163.8K tokens
Max Output: 65.5K tokens
Input Price (Auto): $0.28/1M tokens
Output Price (Auto): $0.42/1M tokens
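At these rates, per-request cost is simple arithmetic. A minimal sketch (the prices are the ones listed above; the token counts and function name are illustrative):

```python
INPUT_PRICE_PER_M = 0.28   # USD per 1M input tokens (Auto routing)
OUTPUT_PRICE_PER_M = 0.42  # USD per 1M output tokens (Auto routing)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed Auto prices."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# e.g. a 100K-token prompt with a 4K-token completion:
print(round(estimate_cost(100_000, 4_000), 4))  # 0.0297
```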
Performance metrics and benchmarks
Sourced from Artificial Analysis.
Intelligence Index: 28.4
Auto routing is available for this model. Explicit provider selection is not available.
Coding Index: 30.0
Agentic Index: 31.0
GPQA Diamond (graduate-level scientific reasoning): 73.8% (better than 69% of models compared)
HLE (Humanity's Last Exam): 8.6% (better than 67% of models compared)
IFBench (instruction-following benchmark): 43.1% (better than 55% of models compared)
T²-Bench Telecom (conversational AI agents in dual-control scenarios): 33.9% (better than 48% of models compared)
AA-LCR (long-context reasoning evaluation): 43.0% (better than 60% of models compared)
GDPval-AA (economically valuable tasks): 28.7% (better than 91% of models compared)
CritPt (research-level physics reasoning): 1.4% (better than 88% of models compared)
SciCode (Python programming for scientific computing): 39.9% (better than 83% of models compared)
Terminal-Bench Hard (agentic coding and terminal use): 25.0% (better than 74% of models compared)
AIME 2025 (American Invitational Mathematics Examination 2025): 57.7% (better than 55% of models compared)
MMLU-Pro (professional and academic subject knowledge): 83.6% (better than 85% of models compared)
AA-Omniscience Accuracy (proportion of correctly answered questions): 22.7% (better than 76% of models compared)
LiveCodeBench (contamination-free coding benchmark): 55.4% (better than 62% of models compared)
AA-Omniscience Hallucination Rate (rate of incorrect answers among non-correct responses): 90.3% (better than 16% of models compared)
Last updated May 15, 2026
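The "better than N% of models compared" figures are percentile ranks over the pool of compared models. A minimal sketch of how such a rank could be computed (the score pool and function name are illustrative, not Artificial Analysis's actual data or method):

```python
def percentile_rank(score: float, pool: list[float]) -> int:
    """Percent of compared model scores this score strictly beats, rounded down."""
    beaten = sum(1 for s in pool if s < score)
    return int(100 * beaten / len(pool))

# Illustrative pool of ten hypothetical model scores on one benchmark:
pool = [55.0, 60.2, 71.9, 73.8, 74.1, 68.0, 80.5, 62.3, 59.9, 77.7]
print(percentile_rank(73.8, pool))  # 60
```

Note that for metrics where lower is better, such as a hallucination rate, the comparison direction would be reversed.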