An ultra-lightweight, fast variant of Gemini 2.5 Flash. Stable release optimized for speed and efficiency.
Added Jun 17, 2025
Context Window
1.0M
Max Output
65.5K
Input Price (Auto)
$0.10/1M
Output Price (Auto)
$0.40/1M
Cache Read (Auto)
$0.01/1M
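The listed auto-routing rates can be turned into a per-request cost estimate. Below is a minimal sketch, assuming prices are per 1M tokens as shown above and that cached input tokens are billed at the cache-read rate instead of the input rate; the function name and token counts are illustrative, not part of any API.

```python
# Per-1M-token prices from the listing above (auto routing).
INPUT_PER_M = 0.10       # $ per 1M input tokens
OUTPUT_PER_M = 0.40      # $ per 1M output tokens
CACHE_READ_PER_M = 0.01  # $ per 1M cached input tokens

def estimate_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Estimated dollar cost of one request at the listed rates.

    Assumes cached input tokens are billed at the cache-read rate
    and the remaining input tokens at the standard input rate.
    """
    uncached = input_tokens - cached_tokens
    return (uncached * INPUT_PER_M
            + cached_tokens * CACHE_READ_PER_M
            + output_tokens * OUTPUT_PER_M) / 1_000_000

# Example: 100K input tokens (20K served from cache) and 5K output tokens.
print(estimate_cost(100_000, 5_000, cached_tokens=20_000))  # → 0.0102
```

At these rates, even long-context requests stay in fractions of a cent, which is consistent with the model's positioning as a speed- and cost-optimized variant.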
Capabilities
Performance metrics and benchmarks
Sourced from Artificial Analysis.
Auto routing is available for this model. Explicit provider selection is not available.
Intelligence Index
12.7
Coding Index
7.4
Agentic Index
10.1
GPQA Diamond
Graduate-level scientific reasoning
47.4%
Better than 31% of models compared
HLE
Humanity's Last Exam
3.7%
Better than 8% of models compared
IFBench
Instruction-following benchmark
31.5%
Better than 21% of models compared
τ²-Bench Telecom
Conversational AI agents in dual-control scenarios
19.0%
Better than 20% of models compared
AA-LCR
Long context reasoning evaluation
31.3%
Better than 51% of models compared
GDPval-AA
Economically valuable tasks
0.0%
Better than 18% of models compared
CritPt
Research-level physics reasoning
0.0%
Better than 36% of models compared
SciCode
Python programming for scientific computing
17.7%
Better than 21% of models compared
Terminal-Bench Hard
Agentic coding and terminal use
2.3%
Better than 22% of models compared
AIME 2025
American Invitational Mathematics Examination 2025
35.3%
Better than 36% of models compared
AIME
American Invitational Mathematics Examination
50.0%
Better than 70% of models compared
MMLU-Pro
Professional and academic subject knowledge
72.4%
Better than 41% of models compared
AA-Omniscience Accuracy
Proportion of correctly answered questions
15.3%
Better than 29% of models compared
Last updated May 15, 2026
LiveCodeBench
Contamination-free coding benchmark
40.0%
Better than 47% of models compared
Math-500
Diverse mathematical problem solving benchmark
92.6%
Better than 70% of models compared
AA-Omniscience Hallucination Rate
Share of non-correct responses that are incorrect answers rather than abstentions (lower is better)
66.4%
Better than 85% of models compared