Xiaomi MiMo V2 Flash is a 309B-parameter Mixture-of-Experts model (15B active) optimized for high-speed reasoning and agentic workflows. It features a hybrid attention architecture that combines sliding-window and global attention for efficient long-context processing, plus native Multi-Token Prediction for faster inference. It is routed only via open-source providers.
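As a rough illustration of the sliding-window vs. global attention split mentioned above, the sketch below builds the two causal mask patterns. The window size and the way layers interleave the two mask types are assumptions for illustration only, not MiMo V2 Flash's actual configuration.

```python
# Illustrative sketch only: how a global causal mask differs from a sliding-window causal mask.
# The window size here is an assumed value, not the model's real setting.
import numpy as np

def global_mask(seq_len: int) -> np.ndarray:
    """Full causal mask: each token attends to every earlier token."""
    return np.tril(np.ones((seq_len, seq_len), dtype=bool))

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    """Causal mask restricted to the most recent `window` tokens,
    keeping per-token attention cost constant as context grows."""
    idx = np.arange(seq_len)
    causal = idx[None, :] <= idx[:, None]          # no attending to future tokens
    recent = idx[:, None] - idx[None, :] < window  # only the last `window` positions
    return causal & recent

if __name__ == "__main__":
    n = 8
    print(global_mask(n).astype(int))
    print(sliding_window_mask(n, window=3).astype(int))
```

In a hybrid layout, some layers use the cheap sliding-window mask while others keep global attention so distant context can still be reached.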
Context Window
256.0K tokens
Max Output
32.8K tokens
Input Price (Auto)
$0.10 / 1M tokens
Output Price (Auto)
$0.31 / 1M tokens
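Using the listed auto-routing prices, per-request cost is simple arithmetic; the token counts in the example are made up for illustration.

```python
# Worked example of the listed auto-routing prices:
# $0.10 per 1M input tokens, $0.31 per 1M output tokens.
INPUT_PRICE_PER_M = 0.10
OUTPUT_PRICE_PER_M = 0.31

def request_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Cost of a single request at the listed per-million-token prices."""
    return (input_tokens / 1e6) * INPUT_PRICE_PER_M + (output_tokens / 1e6) * OUTPUT_PRICE_PER_M

# e.g. a 200K-token prompt with a 4K-token completion:
print(f"${request_cost_usd(200_000, 4_000):.4f}")  # -> $0.0212
```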
Performance metrics and benchmarks
Sourced from Artificial Analysis.
Intelligence Index
30.3
Auto routing is available for this model. Explicit provider selection is not available.
Coding Index
25.8
GPQA Diamond
Graduate-level scientific reasoning
65.6%
Better than 54% of models compared
HLE
Humanity's Last Exam
8.0%
Better than 65% of models compared
IFBench
Instruction-following benchmark
39.9%
Better than 47% of models compared
τ²-Bench Telecom
Conversational AI agents in dual-control scenarios
83.9%
Better than 79% of models compared
AA-LCR
Long context reasoning evaluation
31.3%
Better than 51% of models compared
SciCode
Python programming for scientific computing
25.9%
Better than 38% of models compared
Terminal-Bench Hard
Agentic coding and terminal use
25.8%
Better than 75% of models compared
AIME 2025
American Invitational Mathematics Examination 2025
67.7%
Better than 62% of models compared
MMLU-Pro
Professional and academic subject knowledge
74.4%
Better than 47% of models compared
LiveCodeBench
Contamination-free coding benchmark
40.2%
Better than 48% of models compared
Last updated May 15, 2026