Optimized for agentic reasoning workloads, with a 2M-token context window and fast tool-calling performance. Note that content-policy rejections can still incur charges: depending on which rejection the upstream provider returns, xAI may pass through a $0.05 moderation-failure fee or a $0.055 usage-guidelines-violation fee.
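The fee pass-through above can be sketched as a simple lookup. This is a hedged illustration only; the function name and rejection-type strings are hypothetical, while the two fee amounts come from the listing.

```python
# Hypothetical sketch of the fee pass-through described above.
# The rejection-type keys are illustrative, not a documented API.

MODERATION_FAILURE_FEE = 0.05   # USD, per the listing
USAGE_GUIDELINES_FEE = 0.055    # USD, per the listing

def rejection_fee(rejection_type: str) -> float:
    """Return the pass-through fee for an upstream content-policy rejection."""
    fees = {
        "moderation_failure": MODERATION_FAILURE_FEE,
        "usage_guidelines_violation": USAGE_GUIDELINES_FEE,
    }
    if rejection_type not in fees:
        raise ValueError(f"unknown rejection type: {rejection_type!r}")
    return fees[rejection_type]

print(rejection_fee("usage_guidelines_violation"))  # prints 0.055
```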
Added Nov 20, 2025
Context Window
2.0M
Max Output
131.1K
Input Price (Auto)
$0.20/1M
Output Price (Auto)
$0.50/1M
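The Auto-routed rates above can be turned into a per-request cost estimate. A minimal sketch, assuming the listed prices of $0.20/1M input tokens and $0.50/1M output tokens; the token counts in the example are hypothetical.

```python
# Estimate request cost from the listed Auto-routed per-token rates.
INPUT_PRICE_PER_M = 0.20   # USD per 1M input tokens (Auto)
OUTPUT_PRICE_PER_M = 0.50  # USD per 1M output tokens (Auto)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a long-context request with 1.5M input and 50K output tokens.
print(round(estimate_cost(1_500_000, 50_000), 4))  # prints 0.325
```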
Capabilities
Performance metrics and benchmarks
Sourced from Artificial Analysis.
Intelligence Index
38.6
Auto routing is available for this model. Explicit provider selection is not available.
Coding Index
30.9
Agentic Index
49.3
GPQA Diamond
Graduate-level scientific reasoning
85.3%
Better than 91% of models compared
HLE
Humanity's Last Exam
17.6%
Better than 86% of models compared
IFBench
Instruction-following benchmark
52.7%
Better than 71% of models compared
τ²-Bench Telecom
Conversational AI agents in dual-control scenarios
93.3%
Better than 92% of models compared
AA-LCR
Long-context reasoning evaluation
68.0%
Better than 93% of models compared
GDPval-AA
Economically valuable tasks
27.2%
Better than 89% of models compared
CritPt
Research-level physics reasoning
2.9%
Better than 93% of models compared
SciCode
Python programming for scientific computing
44.2%
Better than 92% of models compared
Terminal-Bench Hard
Agentic coding and terminal use
24.2%
Better than 72% of models compared
AIME 2025
American Invitational Mathematics Examination 2025
89.3%
Better than 88% of models compared
MMLU-Pro
Professional and academic subject knowledge
85.4%
Better than 92% of models compared
AA-Omniscience Accuracy
Proportion of correctly answered questions
25.3%
Better than 87% of models compared
Last updated May 15, 2026
LiveCodeBench
Contamination-free coding benchmark
82.2%
Better than 94% of models compared
AA-Omniscience Hallucination Rate
Share of non-correct responses that were incorrect answers rather than declines
72.4%
Better than 77% of models compared