Mistral Medium 3.5 is a 128B-parameter dense open-weights flagship model for instruction following, reasoning, coding, long-horizon agentic work, tool use, structured output, and multimodal prompts. It supports a 256K context window and configurable reasoning effort.
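The structured-output and reasoning-effort capabilities above map naturally onto an OpenAI-compatible chat-completions request. A minimal sketch of such a payload, assuming a `mistral-medium-3.5` model slug and a `reasoning_effort` parameter (both names are assumptions, not confirmed by this page):

```python
import json

# Hypothetical request payload for an OpenAI-compatible chat completions
# endpoint. The model slug and the "reasoning_effort" field are assumed
# names; check your provider's docs for the exact parameters.
payload = {
    "model": "mistral-medium-3.5",               # assumed slug
    "messages": [
        {"role": "user", "content": "List three risks in this contract."}
    ],
    "reasoning_effort": "medium",                # assumed knob for reasoning effort
    "response_format": {"type": "json_object"},  # structured (JSON) output
    "max_tokens": 32_768,                        # within the 32.8K output cap
}

print(json.dumps(payload, indent=2))
```

Sending this body to a provider that follows the chat-completions convention would exercise the structured-output and reasoning-effort features; the exact field names may differ per provider.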
Added Apr 29, 2026
Context Window: 256.0K tokens
Max Output: 32.8K tokens
Input Price (Auto): $1.50/1M tokens
Output Price (Auto): $7.50/1M tokens
Capabilities
Performance metrics and benchmarks
Sourced from Artificial Analysis.
Intelligence Index: 39.2
Auto routing is available for this model; explicit provider selection is not.
Coding Index: 35.4
GPQA Diamond (graduate-level scientific reasoning): 74.8%, better than 70% of models compared
HLE (Humanity's Last Exam): 12.8%, better than 79% of models compared
IFBench (instruction-following benchmark): 68.8%, better than 87% of models compared
τ²-Bench Telecom (conversational AI agents in dual-control scenarios): 94.2%, better than 94% of models compared
AA-LCR (long-context reasoning evaluation): 61.0%, better than 82% of models compared
SciCode (Python programming for scientific computing): 39.6%, better than 81% of models compared
Terminal-Bench Hard (agentic coding and terminal use): 33.3%, better than 85% of models compared
Last updated May 15, 2026