Thinking-enabled version of DeepSeek-V3.1. DeepSeek-V3.1 is a hybrid model that supports both thinking mode and non-thinking mode. Compared to its predecessor, it performs better at tool calling and agent tasks and reasons more efficiently in thinking mode. Quantized at FP8.
Context Window
128.0K
Max Output
65.5K
Input Price (Auto)
$0.20/1M
Output Price (Auto)
$0.70/1M
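The listed auto-routing prices translate directly into per-request costs. As a minimal sketch (the helper name `estimate_cost_usd` is hypothetical, and real billing may differ, e.g. with cached-input discounts), the arithmetic is:

```python
def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      input_price_per_m: float = 0.20,
                      output_price_per_m: float = 0.70) -> float:
    """Estimate request cost from the listed per-1M-token prices."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# e.g. a 10K-token prompt with a 2K-token reply:
print(round(estimate_cost_usd(10_000, 2_000), 6))  # → 0.0034
```

At these rates, even a prompt filling the full 128K context costs only a few cents.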
Performance metrics and benchmarks
Sourced from Artificial Analysis.
Intelligence Index
27.7
Auto routing is available for this model. Explicit provider selection is not available.
Coding Index
29.7
GPQA Diamond
Graduate-level scientific reasoning
77.9%
Better than 78% of models compared
HLE
Humanity's Last Exam
13.0%
Better than 80% of models compared
IFBench
Instruction-following benchmark
41.5%
Better than 50% of models compared
τ²-Bench Telecom
Conversational AI agents in dual-control scenarios
37.4%
Better than 53% of models compared
AA-LCR
Long context reasoning evaluation
53.3%
Better than 70% of models compared
SciCode
Python programming for scientific computing
39.1%
Better than 79% of models compared
Terminal-Bench Hard
Agentic coding and terminal use
25.0%
Better than 74% of models compared
AIME 2025
American Invitational Mathematics Examination 2025
89.7%
Better than 90% of models compared
MMLU-Pro
Professional and academic subject knowledge
85.1%
Better than 91% of models compared
Last updated May 15, 2026
LiveCodeBench
Contamination-free coding benchmark
78.4%
Better than 90% of models compared