An open-weight, 117B-parameter Mixture-of-Experts (MoE) language model built for reasoning-intensive, agentic, and general-purpose production use. It activates 5.1B parameters per forward pass and, with native MXFP4 quantization, is optimized to run on a single H100 GPU. The model supports configurable reasoning depth, full chain-of-thought access, and native tool use, including function calling, browsing, and structured output generation.
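The features above (configurable reasoning depth, function calling, structured output) can be sketched as a request payload for an OpenAI-compatible serving endpoint. This is a minimal illustration, not the model card's API: the deployment name, the "Reasoning: high" system-prompt convention, and the `get_weather` tool are assumptions; check your serving stack's documentation for the actual conventions.

```python
def build_request(user_prompt: str, reasoning: str = "medium",
                  model: str = "example/model-id") -> dict:
    """Assemble a chat request exercising configurable reasoning depth,
    function calling, and structured output (all field names follow the
    OpenAI-compatible chat-completions schema)."""
    return {
        "model": model,  # placeholder deployment name (assumption)
        "messages": [
            # Reasoning depth is commonly toggled via the system prompt;
            # this exact phrasing is an assumption for illustration.
            {"role": "system", "content": f"Reasoning: {reasoning}"},
            {"role": "user", "content": user_prompt},
        ],
        # Native function calling: declare a tool the model may invoke.
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical example tool
                "description": "Look up current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
        # Structured output: request a JSON object response.
        "response_format": {"type": "json_object"},
        # Cap generation at the listed max output (decimal assumption).
        "max_tokens": 16_400,
    }

payload = build_request("What's the weather in Oslo?", reasoning="high")
```

The resulting dict would typically be POSTed to the endpoint's `/v1/chat/completions` route; the model's tool calls come back in the response's `tool_calls` field.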
Added Feb 3, 2026
Context Window: 128.0K tokens
Max Output: 16.4K tokens
Input Price (Auto): $0.041 per 1M tokens
Output Price (Auto): $0.19 per 1M tokens
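The figures above support some back-of-envelope budgeting: how large a prompt still leaves room for a full-length reply, and what a request costs at the auto-routed rates. The sketch below assumes the listed "K" values are decimal thousands and that billing is a straight per-token rate; actual provider billing may differ.

```python
# Listed figures (decimal-thousands assumption for the token counts).
CONTEXT_WINDOW = 128_000   # 128.0K tokens
MAX_OUTPUT = 16_400        # 16.4K tokens
INPUT_PRICE = 0.041        # $ per 1M input tokens (Auto)
OUTPUT_PRICE = 0.19        # $ per 1M output tokens (Auto)

def max_input_tokens() -> int:
    """Largest prompt that still leaves room for a full-length reply."""
    return CONTEXT_WINDOW - MAX_OUTPUT

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the auto-routed per-token rates."""
    return (input_tokens * INPUT_PRICE
            + output_tokens * OUTPUT_PRICE) / 1_000_000

budget = max_input_tokens()         # 111,600 tokens
cost = request_cost(10_000, 2_000)  # $0.00079
```

For example, a 10K-token prompt with a 2K-token reply costs well under a tenth of a cent at these rates.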
Capabilities
Performance metrics and benchmarks
Sourced from Artificial Analysis.
Intelligence Index
33.3
Coding Index
28.6
Agentic Index
37.9
GPQA Diamond
Graduate-level scientific reasoning
78.2%
Better than 79% of models compared
HLE
Humanity's Last Exam
18.5%
Better than 87% of models compared
IFBench
Instruction-following benchmark
69.0%
Better than 87% of models compared
τ²-Bench Telecom
Conversational AI agents in dual-control scenarios
65.8%
Better than 67% of models compared
AA-LCR
Long context reasoning evaluation
50.7%
Better than 67% of models compared
GDPval-AA
Economically valuable tasks
22.3%
Better than 79% of models compared
CritPt
Research-level physics reasoning
1.1%
Better than 85% of models compared
SciCode
Python programming for scientific computing
38.9%
Better than 79% of models compared
Terminal-Bench Hard
Agentic coding and terminal use
23.5%
Better than 71% of models compared
AIME 2025
American Invitational Mathematics Examination 2025
93.4%
Better than 94% of models compared
MMLU-Pro
Professional and academic subject knowledge
80.8%
Better than 71% of models compared
AA-Omniscience Accuracy
Proportion of correctly answered questions
21.5%
Better than 71% of models compared
LiveCodeBench
Contamination-free coding benchmark
87.8%
Better than 98% of models compared
AA-Omniscience Hallucination Rate
Proportion of incorrect (rather than declined) answers among questions not answered correctly
91.2%
Better than 13% of models compared
Last updated May 15, 2026