DeepSeek-V3.1 is a hybrid model supporting both thinking and non-thinking modes. Compared with its predecessor, it is stronger at tool calling and agentic tasks and reasons more efficiently in thinking mode. This listing is the non-thinking version, quantized to FP8.
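DeepSeek exposes an OpenAI-compatible chat API, and V3.1's two modes are commonly selected by model name ("deepseek-chat" for non-thinking, "deepseek-reasoner" for thinking). The sketch below assembles a request payload under that assumption; endpoint details and field names should be checked against the provider's docs.

```python
import json

def build_chat_request(prompt: str, thinking: bool = False) -> dict:
    """Assemble a chat-completion payload, selecting the mode by model name.

    Model names are assumptions based on DeepSeek's published naming;
    verify against the provider's documentation before use.
    """
    return {
        "model": "deepseek-reasoner" if thinking else "deepseek-chat",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 1024,  # the listing caps output at 65.5K tokens
    }

payload = build_chat_request("Summarize FP8 quantization in one sentence.")
print(json.dumps(payload, indent=2))
```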
Added Jul 26, 2025
Context Window
128.0K
Max Output
65.5K
Input Price (Auto)
$0.21/1M
Output Price (Auto)
$0.73/1M
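The listed prices are per million tokens, so the cost of a request is a simple weighted sum. A minimal sketch, using the Auto prices above and hypothetical token counts:

```python
# Estimate request cost from the listed Auto pricing (per 1M tokens).
INPUT_PRICE_PER_M = 0.21   # $/1M input tokens
OUTPUT_PRICE_PER_M = 0.73  # $/1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in dollars for one request."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# e.g. a 4,000-token prompt with a 1,000-token completion:
print(round(request_cost(4_000, 1_000), 6))  # → 0.00157
```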
Capabilities
Performance metrics and benchmarks
Sourced from Artificial Analysis.
Intelligence Index
28.1
Coding Index
28.4
GPQA Diamond
Graduate-level scientific reasoning
73.5%
Better than 68% of models compared
HLE
Humanity's Last Exam
6.3%
Better than 56% of models compared
IFBench
Instruction-following benchmark
37.8%
Better than 39% of models compared
τ²-Bench Telecom
Conversational AI agents in dual-control scenarios
34.8%
Better than 50% of models compared
AA-LCR
Long context reasoning evaluation
45.0%
Better than 63% of models compared
SciCode
Python programming for scientific computing
36.7%
Better than 71% of models compared
Terminal-Bench Hard
Agentic coding and terminal use
24.2%
Better than 72% of models compared
AIME 2025
American Invitational Mathematics Examination 2025
49.7%
Better than 48% of models compared
MMLU-Pro
Professional and academic subject knowledge
83.3%
Better than 84% of models compared
LiveCodeBench
Contamination-free coding benchmark
57.7%
Better than 64% of models compared
Last updated May 15, 2026