DeepSeek V3.2 (non-thinking mode) — the official successor to V3.2-Exp. A reasoning-first model built for agents, with GPT-5-level performance. Balances inference cost against output length for everyday use. The first DeepSeek model with thinking-in-tool-use capability. Served in FP8.
Added Dec 1, 2025
Context Window
163.0K
Max Output
65.5K
Input Price (Auto)
$0.26/1M
Output Price (Auto)
$0.40/1M
Cache Read (Auto)
$0.026/1M
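As a rough illustration of how the listed Auto prices combine, the sketch below estimates a single request's cost; the token counts in the example are hypothetical, and only the three per-1M rates come from the listing above.

```python
# Listed Auto prices, USD per 1M tokens.
INPUT_PRICE = 0.26        # fresh input tokens
OUTPUT_PRICE = 0.40       # output tokens
CACHE_READ_PRICE = 0.026  # cached input tokens

def request_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Estimate one request's cost in USD.

    `cached_tokens` is the portion of the input served from the prompt
    cache and billed at the cheaper cache-read rate.
    """
    fresh = input_tokens - cached_tokens
    return (fresh * INPUT_PRICE
            + cached_tokens * CACHE_READ_PRICE
            + output_tokens * OUTPUT_PRICE) / 1_000_000

# Hypothetical example: 100K-token prompt, 80K of it cached, 4K-token reply.
print(f"${request_cost(100_000, 4_000, cached_tokens=80_000):.4f}")  # prints $0.0089
```

Cache reads are billed at a tenth of the fresh-input rate, so agent loops that repeatedly resend a long system prompt benefit disproportionately from prompt caching.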
Capabilities
Performance metrics and benchmarks
Sourced from Artificial Analysis.
Intelligence Index
32.1
Coding Index
34.6
GPQA Diamond
Graduate-level scientific reasoning
75.1%
Better than 71% of models compared
HLE
Humanity's Last Exam
10.5%
Better than 74% of models compared
IFBench
Instruction-following benchmark
49.0%
Better than 66% of models compared
τ²-Bench Telecom
Conversational AI agents in dual-control scenarios
78.9%
Better than 76% of models compared
AA-LCR
Long context reasoning evaluation
39.0%
Better than 58% of models compared
SciCode
Python programming for scientific computing
38.7%
Better than 78% of models compared
Terminal-Bench Hard
Agentic coding and terminal use
32.6%
Better than 83% of models compared
AIME 2025
American Invitational Mathematics Examination 2025
59.0%
Better than 56% of models compared
MMLU-Pro
Professional and academic subject knowledge
83.7%
Better than 86% of models compared
LiveCodeBench
Contamination-free coding benchmark
59.3%
Better than 66% of models compared
Last updated May 15, 2026