MiniMax M2.1 builds on M2 with enhanced context understanding and improved complex tool use. It is a 230B-parameter MoE model (10B active) optimized for agentic workflows and long-horizon tasks.
Added Jan 20, 2025
Context Window: 200.0K tokens
Max Output: 131.1K tokens
Input Price (Auto): $0.13 per 1M tokens
Output Price (Auto): $0.50 per 1M tokens
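These prices and limits translate directly into per-request arithmetic. Below is a minimal sketch, assuming the 200.0K context window covers input and output tokens combined and that token counts are already known; the constants come from this page, while the function and variable names are illustrative only and not part of any MiniMax or routing API.

```python
# Cost and budget sketch for MiniMax M2.1 at the Auto prices listed above.
# Helper names are illustrative; they are not an official API.

INPUT_PRICE_PER_M = 0.13    # USD per 1M input tokens (Auto)
OUTPUT_PRICE_PER_M = 0.50   # USD per 1M output tokens (Auto)
CONTEXT_WINDOW = 200_000    # assumed to cover input + output tokens combined
MAX_OUTPUT = 131_100        # max output tokens per request

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at the Auto prices above."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

def fits_budget(input_tokens: int, output_tokens: int) -> bool:
    """Check a request against the context-window and max-output limits."""
    return (output_tokens <= MAX_OUTPUT
            and input_tokens + output_tokens <= CONTEXT_WINDOW)

if __name__ == "__main__":
    # Example: a 150K-token input with a 4K-token completion.
    print(fits_budget(150_000, 4_000))              # True
    print(round(estimate_cost(150_000, 4_000), 4))  # ~0.0215 USD
```

At these rates, output tokens cost roughly four times as much as input tokens, so long completions dominate the per-request bill.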
Capabilities
Performance metrics and benchmarks
Sourced from Artificial Analysis; last updated May 15, 2026.
Intelligence Index: 39.4
Coding Index: 32.8
Agentic Index: 47.4
GPQA Diamond (graduate-level scientific reasoning): 83.0%; better than 87% of models compared
HLE (Humanity's Last Exam): 22.2%; better than 90% of models compared
IFBench (instruction-following benchmark): 69.9%; better than 88% of models compared
τ²-Bench Telecom (conversational AI agents in dual-control scenarios): 85.4%; better than 81% of models compared
AA-LCR (long context reasoning evaluation): 59.0%; better than 79% of models compared
GDPval-AA (economically valuable tasks): 29.4%; better than 93% of models compared
CritPt (research-level physics reasoning): 0.3%; better than 76% of models compared
SciCode (Python programming for scientific computing): 40.7%; better than 86% of models compared
Terminal-Bench Hard (agentic coding and terminal use): 28.8%; better than 78% of models compared
AIME 2025 (American Invitational Mathematics Examination 2025): 82.7%; better than 79% of models compared
MMLU-Pro (professional and academic subject knowledge): 87.5%; better than 98% of models compared
AA-Omniscience Accuracy (proportion of correctly answered questions): 20.5%; better than 65% of models compared
LiveCodeBench (contamination-free coding benchmark): 81.0%; better than 93% of models compared
AA-Omniscience Hallucination Rate (rate of incorrect answers among questions not answered correctly; lower is better): 67.0%; better than 83% of models compared
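Because the accuracy and hallucination-rate figures use different denominators, a small worked example may help. The counts below are invented purely to be consistent with the two percentages above, under the assumption that the hallucination rate is incorrect answers divided by all non-correct outcomes (incorrect plus abstained).

```python
# Illustrative AA-Omniscience arithmetic; the counts are made up to match
# the 20.5% accuracy and 67.0% hallucination rate reported above.
total = 1000
correct = 205                             # 20.5% of all questions
incorrect = 533
abstained = total - correct - incorrect   # 262

accuracy = correct / total                          # 0.205
hallucination_rate = incorrect / (total - correct)  # 533 / 795 ≈ 0.670

print(f"accuracy={accuracy:.1%}, hallucination rate={hallucination_rate:.1%}")
```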