Fast, cost-efficient performance on complex tasks. The workhorse of the Gemini series. Stable release with improved capabilities.
Added Jun 5, 2025
Context Window: 1.0M tokens
Max Output: 65.5K tokens
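One practical implication of these two limits is that the prompt plus any reserved output budget must fit inside the context window. A minimal sketch, taking the listed 1.0M and 65.5K figures at face value (the helper function is illustrative, and real usage should count tokens with the provider's tokenizer):

```python
CONTEXT_WINDOW = 1_000_000  # 1.0M-token context window, per the listing
MAX_OUTPUT = 65_500         # 65.5K-token output cap, per the listing

def max_prompt_tokens(reserved_output: int = MAX_OUTPUT) -> int:
    """Largest prompt that still leaves room for the reserved output."""
    if not 0 <= reserved_output <= MAX_OUTPUT:
        raise ValueError("reserved output must be within the model's cap")
    return CONTEXT_WINDOW - reserved_output

# Reserving the full output budget leaves 934,500 tokens for the prompt.
print(max_prompt_tokens())  # → 934500
```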
Input Price (Auto): $0.30 per 1M tokens
Output Price (Auto): $2.50 per 1M tokens
Cache Read (Auto): $0.030 per 1M tokens
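As a rough illustration, the listed auto-routing rates translate into per-request cost as follows (a minimal Python sketch; the rates come from the listing above, while the token counts in the example are hypothetical):

```python
# Per-1M-token rates from the listing above (auto routing).
INPUT_RATE = 0.30        # $ per 1M fresh input tokens
OUTPUT_RATE = 2.50       # $ per 1M output tokens
CACHE_READ_RATE = 0.030  # $ per 1M cached input tokens read

def request_cost(input_tokens: int, output_tokens: int,
                 cached_tokens: int = 0) -> float:
    """Estimate the cost of one request in dollars.

    Cached input tokens are billed at the cache-read rate instead of
    the full input rate; the remaining input tokens pay the input rate.
    """
    fresh_input = input_tokens - cached_tokens
    return (
        fresh_input * INPUT_RATE
        + cached_tokens * CACHE_READ_RATE
        + output_tokens * OUTPUT_RATE
    ) / 1_000_000

# Example: 100K input tokens (80K served from cache) and 2K output tokens.
print(request_cost(100_000, 2_000, cached_tokens=80_000))  # → 0.0134
```

Note how the cache-read discount dominates here: serving 80K of the 100K input tokens from cache cuts the input portion of the bill from $0.030 to $0.0084.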
Capabilities
Performance metrics and benchmarks
Sourced from Artificial Analysis.
Intelligence Index: 20.6
Auto routing is available for this model. Explicit provider selection is not available.
Coding Index: 17.8
Agentic Index: 23.0
GPQA Diamond (graduate-level scientific reasoning): 68.3%, better than 59% of models compared
HLE (Humanity's Last Exam): 5.1%, better than 43% of models compared
IFBench (instruction-following benchmark): 39.0%, better than 43% of models compared
τ²-Bench Telecom (conversational AI agents in dual-control scenarios): 14.9%, better than 15% of models compared
AA-LCR (long-context reasoning evaluation): 45.9%, better than 64% of models compared
GDPval-AA (economically valuable tasks): 17.5%, better than 74% of models compared
CritPt (research-level physics reasoning): 1.4%, better than 88% of models compared
SciCode (Python programming for scientific computing): 29.1%, better than 49% of models compared
Terminal-Bench Hard (agentic coding and terminal use): 12.1%, better than 54% of models compared
AIME 2025 (American Invitational Mathematics Examination 2025): 60.3%, better than 57% of models compared
AIME (American Invitational Mathematics Examination): 50.0%, better than 70% of models compared
MMLU-Pro (professional and academic subject knowledge): 80.9%, better than 72% of models compared
AA-Omniscience Accuracy (proportion of correctly answered questions): 26.7%, better than 90% of models compared
Last updated May 15, 2026
LiveCodeBench (contamination-free coding benchmark): 49.5%, better than 56% of models compared
Math-500 (diverse mathematical problem-solving benchmark): 93.2%, better than 72% of models compared
AA-Omniscience Hallucination Rate (rate of incorrect answers among non-correct responses; lower is better): 88.3%, better than 24% of models compared
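The hallucination-rate definition can be made concrete with a short sketch. Assuming, per the description above, that non-correct responses split into incorrect answers and declined (abstained) ones, the rate measures how often the model answers wrongly instead of abstaining; the counts below are hypothetical and chosen only to reproduce the listed 88.3% figure:

```python
def hallucination_rate(incorrect: int, declined: int) -> float:
    """Rate of incorrect answers among non-correct responses.

    Non-correct responses are those that are either incorrect or
    declined; lower is better, since abstaining is preferable to
    answering wrongly.
    """
    non_correct = incorrect + declined
    if non_correct == 0:
        return 0.0
    return incorrect / non_correct

# Hypothetical tally: 530 incorrect answers and 70 abstentions
# in the non-correct pool → 530 / 600 ≈ 88.3%.
print(f"{hallucination_rate(530, 70):.1%}")  # → 88.3%
```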