Gemini 2.5 Pro (stable release). Google's most capable generalist model, with strong performance across a wide range of tasks.
Added Jun 5, 2025
Context Window: 1.0M tokens
Max Output: 65.5K tokens
Input Price (Auto): $2.50 / 1M tokens
Output Price (Auto): $10.00 / 1M tokens
Cache Read (Auto): $0.25 / 1M tokens
Auto routing is available for this model; explicit provider selection is not available.
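Given the per-token rates above, the cost of a single request is simple arithmetic. Below is a minimal sketch: the rate constants are taken from the table above, while the `request_cost` helper and the token counts are hypothetical, for illustration only.

```python
# Illustrative cost arithmetic using the auto-routing rates listed above.
# The helper and token counts are hypothetical, chosen only to show the math.

INPUT_PER_M = 2.50       # $ per 1M input tokens
OUTPUT_PER_M = 10.00     # $ per 1M output tokens
CACHE_READ_PER_M = 0.25  # $ per 1M cached input tokens read

def request_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Cost in dollars; cached input tokens are billed at the cache-read
    rate instead of the full input rate."""
    uncached = input_tokens - cached_tokens
    return (uncached * INPUT_PER_M
            + cached_tokens * CACHE_READ_PER_M
            + output_tokens * OUTPUT_PER_M) / 1_000_000

# e.g. a 200K-token prompt with 150K tokens served from cache, 4K-token response:
print(f"${request_cost(200_000, 4_000, cached_tokens=150_000):.4f}")  # -> $0.2025
```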
Capabilities
Performance metrics and benchmarks, sourced from Artificial Analysis.
Intelligence Index: 34.6
Coding Index: 32.0
GPQA Diamond (graduate-level scientific reasoning): 84.4%, better than 90% of models compared
HLE (Humanity's Last Exam): 21.1%, better than 89% of models compared
IFBench (instruction-following benchmark): 48.7%, better than 66% of models compared
τ²-Bench Telecom (conversational AI agents in dual-control scenarios): 54.1%, better than 63% of models compared
AA-LCR (long-context reasoning evaluation): 66.0%, better than 90% of models compared
SciCode (Python programming for scientific computing): 42.8%, better than 90% of models compared
Terminal-Bench Hard (agentic coding and terminal use): 26.5%, better than 76% of models compared
AIME 2025 (American Invitational Mathematics Examination 2025): 87.7%, better than 85% of models compared
AIME (American Invitational Mathematics Examination): 88.7%, better than 95% of models compared
MMLU-Pro (professional and academic subject knowledge): 86.2%, better than 95% of models compared
LiveCodeBench (contamination-free coding benchmark): 80.1%, better than 92% of models compared
Math-500 (diverse mathematical problem solving benchmark): 96.7%, better than 86% of models compared
Last updated May 15, 2026