OpenAI's flagship series of reasoning models, designed for hard problems in science, coding, math, and similar fields.
Context Window: 128K tokens
Max Output: 32.8K tokens
Input Price (Auto): $14.99/1M tokens
Output Price (Auto): $59.99/1M tokens
Cache Read (Auto): $7.50/1M tokens
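The per-token rates above translate directly into a per-request cost formula. The sketch below is illustrative only, not official billing code; it assumes cached input tokens are billed at the cache-read rate in place of (not in addition to) the full input rate.

```python
# Rates from the card above, converted from USD per 1M tokens to USD per token.
INPUT_RATE = 14.99 / 1_000_000       # input tokens
OUTPUT_RATE = 59.99 / 1_000_000      # output tokens
CACHE_READ_RATE = 7.50 / 1_000_000   # cached input tokens (assumption: replaces input rate)

def estimate_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Estimated USD cost of one request under the rates above."""
    uncached = input_tokens - cached_tokens
    return (uncached * INPUT_RATE
            + cached_tokens * CACHE_READ_RATE
            + output_tokens * OUTPUT_RATE)

# Example: a 10K-token prompt with 2K tokens served from cache, 5K-token completion.
cost = estimate_cost(10_000, 5_000, cached_tokens=2_000)
print(f"${cost:.4f}")
```

For long conversations that repeatedly resend the same prefix, the cache-read rate (half the input rate here) is where most of the savings come from.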
Capabilities
Performance metrics and benchmarks
Sourced from Artificial Analysis.
Intelligence Index: 30.7
Coding Index: 20.5
Auto routing is available for this model; explicit provider selection is not available.
GPQA Diamond (graduate-level scientific reasoning): 74.7%, better than 70% of models compared
HLE (Humanity's Last Exam): 7.7%, better than 64% of models compared
IFBench (instruction-following benchmark): 70.3%, better than 89% of models compared
τ²-Bench Telecom (conversational AI agents in dual-control scenarios): 62.6%, better than 65% of models compared
AA-LCR (long-context reasoning evaluation): 59.3%, better than 81% of models compared
SciCode (Python programming for scientific computing): 35.8%, better than 66% of models compared
Terminal-Bench Hard (agentic coding and terminal use): 12.9%, better than 55% of models compared
AIME (American Invitational Mathematics Examination): 72.3%, better than 82% of models compared
MATH-500 (diverse mathematical problem solving): 97.0%, better than 87% of models compared
MMLU-Pro (professional and academic subject knowledge): 84.1%, better than 88% of models compared
LiveCodeBench (contamination-free coding benchmark): 67.9%, better than 76% of models compared
Last updated May 15, 2026