Mistral Medium 3.1 is an updated release of Mistral Medium 3, a high-performance, enterprise-grade language model designed to deliver frontier-level capabilities at significantly reduced operational cost. It combines state-of-the-art reasoning and multimodal performance with roughly 8× lower cost than traditional large models, making it suitable for scalable deployments across professional and industrial use cases. The model excels in coding, STEM reasoning, and enterprise adaptation; it supports hybrid, on-prem, and in-VPC deployments and is optimized for integration into custom workflows. Mistral Medium 3.1 offers competitive accuracy relative to larger models such as Claude Sonnet 3.5/3.7, Llama 4 Maverick, and Command R+, while maintaining broad compatibility across deployment environments.
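For reference, here is a minimal sketch of calling the model through Mistral's chat completions API. The endpoint and the model ID "mistral-medium-latest" are assumptions based on Mistral's public API conventions, not details taken from this listing; adjust them for the provider or router you actually use.

```python
# Minimal sketch: querying Mistral Medium 3.1 via a chat completions endpoint.
# Assumptions: Mistral's public API endpoint and the "mistral-medium-latest"
# model alias; pin a dated model version for production use.
import os
import requests

resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "mistral-medium-latest",
        "messages": [
            {"role": "user", "content": "Outline the trade-offs of in-VPC LLM deployment."}
        ],
        "max_tokens": 512,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```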
Added Sep 5, 2025
Context Window: 131.1K tokens
Max Output: 32.8K tokens
Input Price (Auto): $0.40 / 1M tokens
Output Price (Auto): $2.00 / 1M tokens
Auto routing is available for this model; explicit provider selection is not available.
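Given these rates, per-request cost is simple arithmetic. A short sketch, using the prices listed above (subject to change):

```python
# Back-of-envelope request cost at the listed auto-route prices:
# $0.40 per 1M input tokens, $2.00 per 1M output tokens.
INPUT_USD_PER_M = 0.40
OUTPUT_USD_PER_M = 2.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_USD_PER_M + output_tokens * OUTPUT_USD_PER_M) / 1_000_000

# Worst case: a full 131.1K-token context and the 32.8K-token max output
# comes to about $0.052 + $0.066 = $0.118.
print(f"${request_cost(131_100, 32_800):.4f}")  # -> $0.1180
```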
Performance metrics and benchmarks
Sourced from Artificial Analysis. Last updated May 15, 2026.
Intelligence Index
21.3
Coding Index
18.3
Agentic Index
25.3
GPQA Diamond
Graduate-level scientific reasoning
58.8%
Better than 45% of models compared
HLE
Humanity's Last Exam
4.4%
Better than 26% of models compared
IFBench
Instruction-following benchmark
39.8%
Better than 46% of models compared
τ²-Bench Telecom
Conversational AI agents in dual-control scenarios
40.6%
Better than 55% of models compared
AA-LCR
Long context reasoning evaluation
19.7%
Better than 36% of models compared
GDPval-AA
Economically valuable tasks
14.1%
Better than 68% of models compared
CritPt
Research-level physics reasoning
0.0%
Better than 36% of models compared
SciCode
Python programming for scientific computing
33.8%
Better than 61% of models compared
Terminal-Bench Hard
Agentic coding and terminal use
10.6%
Better than 51% of models compared
AIME 2025
American Invitational Mathematics Examination 2025
38.3%
Better than 40% of models compared
MMLU-Pro
Professional and academic subject knowledge
68.3%
Better than 32% of models compared
AA-Omniscience Accuracy
Proportion of correctly answered questions
19.8%
Better than 60% of models compared
LiveCodeBench
Contamination-free coding benchmark
40.6%
Better than 49% of models compared
AA-Omniscience Hallucination Rate
Rate of incorrect answers, as opposed to abstentions, among non-correct responses (lower is better)
81.3%
Better than 54% of models compared
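To make the hallucination-rate definition concrete: among responses that are not correct, it is the share that are wrong answers rather than abstentions. A minimal sketch with invented counts:

```python
# AA-Omniscience hallucination rate, per the description above.
# Counts are hypothetical, chosen to reproduce the listed 81.3%.
incorrect = 813   # confidently wrong answers
abstained = 187   # declined / "I don't know" responses
hallucination_rate = incorrect / (incorrect + abstained)
print(f"{hallucination_rate:.1%}")  # -> 81.3%
```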