GLM-4.5-Air is a model with 106B total parameters (12B active), designed to unify frontier reasoning, coding, and agentic capabilities. On the SWE-bench Verified benchmark, it delivers the best performance at its scale, with a competitive performance-to-cost ratio.
Added Apr 15, 2025
Context Window: 128.0K tokens
Max Output: 98.3K tokens
Input Price (Auto): $0.12 per 1M tokens
Output Price (Auto): $0.80 per 1M tokens
Auto routing is available for this model; explicit provider selection is not available.
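As a rough illustration of the arithmetic behind these rates, the sketch below estimates per-request cost from token counts. The function name and the sample token counts are hypothetical; only the $0.12 and $0.80 per-1M-token Auto rates come from the listing above.

```python
# Estimate per-request cost from the listed Auto rates (USD per 1M tokens).
# estimate_cost and the sample token counts are illustrative, not part of any API.

INPUT_PRICE_PER_M = 0.12   # $ per 1M input tokens (Auto)
OUTPUT_PRICE_PER_M = 0.80  # $ per 1M output tokens (Auto)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 20K-token prompt with a 2K-token completion.
print(f"${estimate_cost(20_000, 2_000):.4f}")  # $0.0040
```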
Performance metrics and benchmarks, sourced from Artificial Analysis. Last updated May 15, 2026.
Intelligence Index: 23.2
Coding Index: 23.8
Agentic Index: 21.0
GPQA Diamond (graduate-level scientific reasoning): 73.3%, better than 68% of models compared
HLE (Humanity's Last Exam): 6.8%, better than 59% of models compared
IFBench (instruction-following benchmark): 37.6%, better than 38% of models compared
τ²-Bench Telecom (conversational AI agents in dual-control scenarios): 46.5%, better than 58% of models compared
AA-LCR (long-context reasoning evaluation): 43.7%, better than 61% of models compared
GDPval-AA (economically valuable tasks): 3.0%, better than 45% of models compared
CritPt (research-level physics reasoning): 0.0%, better than 36% of models compared
SciCode (Python programming for scientific computing): 30.6%, better than 54% of models compared
Terminal-Bench Hard (agentic coding and terminal use): 20.5%, better than 67% of models compared
AIME 2025 (American Invitational Mathematics Examination 2025): 80.7%, better than 77% of models compared
AIME (American Invitational Mathematics Examination): 67.3%, better than 77% of models compared
MMLU-Pro (professional and academic subject knowledge): 81.5%, better than 75% of models compared
AA-Omniscience Accuracy (proportion of correctly answered questions): 15.5%, better than 32% of models compared
LiveCodeBench (contamination-free coding benchmark): 68.4%, better than 76% of models compared
Math-500 (diverse mathematical problem solving benchmark): 96.5%, better than 84% of models compared
AA-Omniscience Hallucination Rate (rate of incorrect answers among non-correct responses; lower is better): 92.3%, better than 11% of models compared