The coding-specialized version of Grok 4.
Added Aug 26, 2025
Context Window
256.0K
Max Output
131.1K
Input Price (Auto)
$0.20/1M
Output Price (Auto)
$1.50/1M
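At the listed auto-routed rates, the cost of a single request is simple arithmetic over token counts. A minimal sketch (the token counts in the example are illustrative values, not figures from this page):

```python
# Per-token prices from the card above (auto routing), in USD per 1M tokens.
INPUT_PRICE_PER_M = 0.20
OUTPUT_PRICE_PER_M = 1.50

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 50K-token prompt producing a 4K-token completion.
print(f"${estimate_cost(50_000, 4_000):.4f}")  # → $0.0160
```

Note that the 256K context window bounds input plus output combined, while the 131.1K max-output limit caps the completion alone.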
Capabilities
Performance metrics and benchmarks
Sourced from Artificial Analysis.
Intelligence Index
28.7
Auto routing is available for this model. Explicit provider selection is not available.
Coding Index
23.7
Agentic Index
35.6
GPQA Diamond
Graduate-level scientific reasoning
72.7%
Better than 67% of models compared
HLE
Humanity's Last Exam
7.5%
Better than 63% of models compared
IFBench
Instruction-following benchmark
41.4%
Better than 50% of models compared
τ²-Bench Telecom
Conversational AI agents in dual-control scenarios
75.7%
Better than 74% of models compared
AA-LCR
Long context reasoning evaluation
48.3%
Better than 66% of models compared
GDPval-AA
Economically valuable tasks
13.2%
Better than 65% of models compared
CritPt
Research-level physics reasoning
0.0%
Better than 36% of models compared
SciCode
Python programming for scientific computing
36.2%
Better than 69% of models compared
Terminal-Bench Hard
Agentic coding and terminal use
17.4%
Better than 64% of models compared
AIME 2025
American Invitational Mathematics Examination 2025
43.3%
Better than 44% of models compared
MMLU-Pro
Professional and academic subject knowledge
79.3%
Better than 64% of models compared
AA-Omniscience Accuracy
Proportion of correctly answered questions
23.8%
Better than 81% of models compared
Last updated May 15, 2026
LiveCodeBench
Contamination-free coding benchmark
65.7%
Better than 74% of models compared
AA-Omniscience Hallucination Rate
Rate of incorrect answers among non-correct responses
78.5%
Better than 62% of models compared