DeepSeek-V3.1-Terminus. The latest update builds on V3.1's strengths while addressing key user feedback: improved language consistency (fewer CN/EN mix-ups and no stray characters), stronger Code Agent and Search Agent performance, and more stable, reliable outputs across benchmarks. Served in FP8 precision.
Added Aug 2, 2025
Context Window
128.0K
Max Output
65.5K
Input Price (Auto)
$0.26/1M
Output Price (Auto)
$0.73/1M
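The listed auto-routing rates can be turned into a per-request cost estimate with simple arithmetic. A minimal sketch, using the prices shown above; the token counts in the example are hypothetical:

```python
# Rates from this page (Auto routing), in USD per 1M tokens.
INPUT_PRICE_PER_M = 0.26   # input tokens
OUTPUT_PRICE_PER_M = 0.73  # output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# e.g. a 10k-token prompt with a 2k-token completion:
print(f"${request_cost(10_000, 2_000):.5f}")  # → $0.00406
```

Note that actual billing may differ per provider; Auto routing prices are averages across the providers serving this model.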
Capabilities
Performance metrics and benchmarks
Sourced from Artificial Analysis.
Intelligence Index
28.5
Coding Index
31.9
Agentic Index
28.6
GPQA Diamond
Graduate-level scientific reasoning
75.1%
Better than 71% of models compared
HLE
Humanity's Last Exam
8.4%
Better than 67% of models compared
IFBench
Instruction-following benchmark
41.2%
Better than 49% of models compared
τ²-Bench Telecom
Conversational AI agents in dual-control scenarios
37.1%
Better than 53% of models compared
AA-LCR
Long context reasoning evaluation
43.3%
Better than 61% of models compared
GDPval-AA
Economically valuable tasks
23.8%
Better than 80% of models compared
CritPt
Research-level physics reasoning
0.0%
Better than 36% of models compared
SciCode
Python programming for scientific computing
32.1%
Better than 57% of models compared
Terminal-Bench Hard
Agentic coding and terminal use
31.8%
Better than 82% of models compared
AIME 2025
American Invitational Mathematics Examination 2025
53.7%
Better than 51% of models compared
MMLU-Pro
Professional and academic subject knowledge
83.6%
Better than 85% of models compared
AA-Omniscience Accuracy
Proportion of correctly answered questions
23.4%
Better than 79% of models compared
Last updated May 15, 2026
LiveCodeBench
Contamination-free coding benchmark
52.9%
Better than 60% of models compared
AA-Omniscience Hallucination Rate
Share of incorrect answers among questions not answered correctly (lower is better)
85.3%
Better than 38% of models compared