Google's Gemma 3 27B instruction-tuned model, running inside a TEE (Trusted Execution Environment) with provider attestation support.
Context Window: 131.1K tokens
Max Output: 8.2K tokens
Input Price (Auto): $0.20/1M tokens
Output Price (Auto): $0.80/1M tokens
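The listed Auto-route prices can be turned into a per-request cost estimate. This is a minimal sketch, assuming the prices apply uniformly per token; the helper name `request_cost` is illustrative, not part of any provider API.

```python
# Cost estimate for a single request at the listed Auto-route rates
# (assumed: $0.20 per 1M input tokens, $0.80 per 1M output tokens).

INPUT_PRICE_PER_M = 0.20   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.80  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a full 131,072-token context with an 8,192-token completion.
print(round(request_cost(131_072, 8_192), 6))  # → 0.032768
```

At these rates, even a maximal request (full context window plus maximum output) costs only a few cents.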
Performance metrics and benchmarks, sourced from Artificial Analysis.
Intelligence Index: 10.3
Auto routing is available for this model. Explicit provider selection is not available.
Coding Index: 9.6
GPQA Diamond (graduate-level scientific reasoning): 42.8%, better than 25% of models compared
HLE (Humanity's Last Exam): 4.7%, better than 34% of models compared
IFBench (instruction-following benchmark): 31.8%, better than 22% of models compared
τ²-Bench Telecom (conversational AI agents in dual-control scenarios): 10.5%, better than 9% of models compared
AA-LCR (long-context reasoning evaluation): 5.7%, better than 19% of models compared
SciCode (Python programming for scientific computing): 21.2%, better than 27% of models compared
Terminal-Bench Hard (agentic coding and terminal use): 3.8%, better than 28% of models compared
AIME 2025 (American Invitational Mathematics Examination 2025): 20.7%, better than 22% of models compared
AIME (American Invitational Mathematics Examination): 25.3%, better than 52% of models compared
MMLU-Pro (professional and academic subject knowledge): 66.9%, better than 29% of models compared
Last updated May 15, 2026
LiveCodeBench (contamination-free coding benchmark): 13.7%, better than 13% of models compared
Math-500 (diverse mathematical problem solving benchmark): 88.3%, better than 59% of models compared