Mistral Small 4 is a hybrid mixture-of-experts (MoE) model that unifies instruct, reasoning, and coding behavior in a single multimodal model. It supports text and image input, native function calling, JSON output, and per-request reasoning-effort controls.
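The request-level features above (JSON output, per-request reasoning effort) would typically be exercised through an OpenAI-compatible chat completions payload. A minimal sketch follows; the model slug, the `reasoning_effort` field name, and the `response_format` shape are assumptions for illustration, not confirmed by this page:

```python
import json

# Hypothetical request body for an OpenAI-compatible endpoint.
# Model slug, "reasoning_effort", and "response_format" are assumed names.
payload = {
    "model": "mistralai/mistral-small-4",
    "messages": [
        {"role": "user", "content": "Return the capital of France as JSON."}
    ],
    "response_format": {"type": "json_object"},  # request strict JSON output
    "reasoning_effort": "low",                   # per-request effort control
}

print(json.dumps(payload, indent=2))
```

Sending this body to the provider's chat completions endpoint (with the usual authorization header) would be the remaining step; the sketch only shows the payload shape.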
Added Mar 16, 2026
Context Window: 262.1K tokens
Max Output: 16.4K tokens
Input Price (Auto): $0.40 per 1M tokens
Output Price (Auto): $1.40 per 1M tokens
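At these rates, a back-of-the-envelope request cost can be computed directly. A sketch assuming 10,000 input tokens and 1,000 output tokens (illustrative numbers, not from this page):

```python
# Illustrative cost estimate at the listed auto-routing rates.
INPUT_PRICE = 0.40 / 1_000_000   # dollars per input token
OUTPUT_PRICE = 1.40 / 1_000_000  # dollars per output token

input_tokens = 10_000   # hypothetical prompt size
output_tokens = 1_000   # hypothetical completion size

cost = input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE
print(f"${cost:.4f}")  # → $0.0054
```

Output pricing dominates per token here (3.5x the input rate), so long completions drive cost faster than long prompts.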
Capabilities
Performance metrics and benchmarks, sourced from Artificial Analysis.
Intelligence Index: 18.6
Coding Index: 16.4

Auto routing is available for this model; explicit provider selection is not available.
GPQA Diamond (graduate-level scientific reasoning): 57.1%, better than 42% of models compared
HLE (Humanity's Last Exam): 3.7%, better than 8% of models compared
IFBench (instruction-following benchmark): 32.8%, better than 25% of models compared
T²-Bench Telecom (conversational AI agents in dual-control scenarios): 18.4%, better than 19% of models compared
AA-LCR (long-context reasoning evaluation): 21.3%, better than 39% of models compared
SciCode (Python programming for scientific computing): 28.1%, better than 45% of models compared
Terminal-Bench Hard (agentic coding and terminal use): 10.6%, better than 51% of models compared
Last updated May 15, 2026