GPT-4.1 is the new flagship model from OpenAI. It has a huge context window (1M tokens), outperforms GPT-4o and GPT-4.5 on coding, and is very strong at long-context understanding.
Added Sep 10, 2025
Context Window: 1.0M
Max Output: 32.8K
Input Price (Auto): $2.00/1M
Output Price (Auto): $8.00/1M
Cache Read (Auto): $0.50/1M
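As a rough sketch of what these rates mean per request, the cost of a call can be estimated from its token counts, billing cached input tokens at the cache-read rate and the rest at the input rate. The function and example figures below are illustrative, not an official billing formula:

```python
# Listed GPT-4.1 rates (dollars per token), from the pricing above:
# $2.00/1M input, $8.00/1M output, $0.50/1M cached input (auto routing).
INPUT_RATE = 2.00 / 1_000_000
OUTPUT_RATE = 8.00 / 1_000_000
CACHE_READ_RATE = 0.50 / 1_000_000

def request_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Estimate the dollar cost of one request.

    Tokens served from cache are billed at the cache-read rate;
    the remaining input tokens are billed at the normal input rate.
    """
    fresh_input = input_tokens - cached_tokens
    return (fresh_input * INPUT_RATE
            + cached_tokens * CACHE_READ_RATE
            + output_tokens * OUTPUT_RATE)

# Hypothetical example: a 100k-token prompt, 80k of it served from cache,
# producing a 2k-token reply.
print(f"${request_cost(100_000, 2_000, cached_tokens=80_000):.4f}")  # → $0.0960
```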
Capabilities
Performance metrics and benchmarks
Sourced from Artificial Analysis.
Intelligence Index: 26.3
Auto routing is available for this model. Explicit provider selection is not available.
Coding Index: 21.8
GPQA Diamond (graduate-level scientific reasoning): 66.6%, better than 56% of models compared
HLE (Humanity's Last Exam): 4.6%, better than 31% of models compared
IFBench (instruction-following benchmark): 43.0%, better than 55% of models compared
τ²-Bench Telecom (conversational AI agents in dual-control scenarios): 47.1%, better than 59% of models compared
AA-LCR (long-context reasoning evaluation): 61.0%, better than 82% of models compared
SciCode (Python programming for scientific computing): 38.1%, better than 76% of models compared
Terminal-Bench Hard (agentic coding and terminal use): 13.6%, better than 57% of models compared
AIME 2025 (American Invitational Mathematics Examination 2025): 34.7%, better than 35% of models compared
AIME (American Invitational Mathematics Examination): 43.7%, better than 65% of models compared
MMLU-Pro (professional and academic subject knowledge): 80.6%, better than 70% of models compared
LiveCodeBench (contamination-free coding benchmark): 45.7%, better than 52% of models compared
Math-500 (diverse mathematical problem solving benchmark): 91.3%, better than 67% of models compared
Last updated May 15, 2026