An open-weight 21B-parameter model released under the Apache 2.0 license. It uses a Mixture-of-Experts (MoE) architecture that activates 3.6B parameters per forward pass, optimized for low-latency inference and deployment on consumer or single-GPU hardware. The model is trained on OpenAI's Harmony response format and supports configurable reasoning levels, fine-tuning, and agentic capabilities including function calling, tool use, and structured outputs.
Added Aug 5, 2025
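Since the model advertises function calling and configurable reasoning levels, a request sketch can make those capabilities concrete. This is a minimal sketch assuming an OpenAI-style Chat Completions payload; the model id, the `reasoning_effort` parameter name, and the `get_weather` tool are all illustrative, and actual parameter support depends on the serving stack.

```python
# Sketch of a Chat Completions-style request for a model supporting
# configurable reasoning effort and function calling. Payload shape
# follows OpenAI-style APIs; exact parameters depend on the host.
import json

def build_request(prompt: str, effort: str = "low") -> dict:
    """Assemble a request dict with one tool the model may call."""
    return {
        "model": "gpt-oss-20b",          # illustrative model id
        "reasoning_effort": effort,       # "low" | "medium" | "high"
        "messages": [{"role": "user", "content": prompt}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",    # hypothetical example tool
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
    }

req = build_request("What's the weather in Oslo?", effort="high")
print(json.dumps(req)[:60])
```

In practice the payload would be POSTed to whichever OpenAI-compatible endpoint serves the model; structured outputs would be requested the same way via a response-format or JSON-schema parameter where the host supports one.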
Context Window: 128.0K
Max Output: 16.4K
Input Price (Auto): $0.032/1M
Output Price (Auto): $0.15/1M
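The listed rates make per-request cost easy to estimate: input and output tokens are billed separately per million. A small sketch using the Auto-routing prices above (the token counts in the example are hypothetical):

```python
# Estimating request cost from the listed Auto-routing rates:
# $0.032 per 1M input tokens, $0.15 per 1M output tokens.

INPUT_PER_M = 0.032
OUTPUT_PER_M = 0.15

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Blended cost of one request at the listed per-million rates."""
    return (input_tokens / 1e6) * INPUT_PER_M + (output_tokens / 1e6) * OUTPUT_PER_M

# A full 128K-token context with a maximal 16.4K-token completion
# stays well under a cent:
print(round(cost_usd(128_000, 16_400), 6))
```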
Capabilities
Performance metrics and benchmarks
Sourced from Artificial Analysis.
Intelligence Index: 24.5
Coding Index: 18.5
Agentic Index: 27.6
GPQA Diamond (graduate-level scientific reasoning): 68.8% (better than 60% of models compared)
HLE (Humanity's Last Exam): 9.8% (better than 72% of models compared)
IFBench (instruction-following benchmark): 65.1% (better than 83% of models compared)
τ²-Bench Telecom (conversational AI agents in dual-control scenarios): 60.2% (better than 65% of models compared)
AA-LCR (long-context reasoning evaluation): 30.7% (better than 49% of models compared)
GDPval-AA (economically valuable tasks): 7.6% (better than 53% of models compared)
CritPt (research-level physics reasoning): 1.4% (better than 88% of models compared)
SciCode (Python programming for scientific computing): 34.4% (better than 62% of models compared)
Terminal-Bench Hard (agentic coding and terminal use): 10.6% (better than 51% of models compared)
AIME 2025 (American Invitational Mathematics Examination 2025): 89.3% (better than 88% of models compared)
MMLU-Pro (professional and academic subject knowledge): 74.8% (better than 48% of models compared)
AA-Omniscience Accuracy (proportion of correctly answered questions): 15.5% (better than 32% of models compared)
LiveCodeBench (contamination-free coding benchmark): 77.7% (better than 89% of models compared)
AA-Omniscience Hallucination Rate (rate of incorrect answers among non-correct responses): 94.1% (better than 8% of models compared)
Last updated May 15, 2026
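The recurring "better than X% of models compared" figures read as percentile ranks: the share of compared models scoring strictly below this one. A minimal sketch of that computation, using hypothetical scores rather than the actual comparison set:

```python
# "Better than X% of models compared" as a percentile rank:
# the percentage of compared models scoring strictly below this one.

def better_than_pct(score: float, others: list[float]) -> float:
    """Percentage of `others` that score strictly below `score`."""
    below = sum(1 for s in others if s < score)
    return 100.0 * below / len(others)

scores = [40.0, 55.0, 68.8, 70.2, 81.5]  # hypothetical comparison set
print(better_than_pct(68.8, scores))      # two of five models score lower
```

Note that for a metric where lower is better, such as the hallucination rate above, a small "better than" percentage signals a weak result rather than a strong one.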