GLM-4.5 is Z.ai's latest flagship foundation model, purpose-built for agent-based applications. It uses a Mixture-of-Experts (MoE) architecture with 355B total and 32B active parameters, and supports a context length of up to 128K tokens. GLM-4.5 delivers significantly enhanced capabilities in reasoning, code generation, and agent alignment.
Added Apr 15, 2025
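For readers who want to try the model: a minimal sketch of a chat completion request, assuming an OpenAI-compatible gateway. The base URL, environment variable, and model slug below are illustrative assumptions, not values taken from this page.

```python
# A minimal sketch of calling GLM-4.5 through an OpenAI-compatible
# gateway. The base URL, API key variable, and the "z-ai/glm-4.5"
# model slug are assumptions for illustration only.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://example-gateway.invalid/v1",  # hypothetical gateway endpoint
    api_key=os.environ["GATEWAY_API_KEY"],          # hypothetical env var
)

response = client.chat.completions.create(
    model="z-ai/glm-4.5",  # assumed model slug
    messages=[
        {"role": "user", "content": "Summarize the tradeoffs of MoE architectures."}
    ],
    max_tokens=1024,  # completions must stay within the 65.5K max-output limit below
)
print(response.choices[0].message.content)
```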
Context Window: 128K tokens
Max Output: 65.5K tokens
Input Price (Auto): $0.30 per 1M tokens
Output Price (Auto): $1.30 per 1M tokens
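To make the auto-routed pricing concrete, the sketch below converts the per-million-token rates above into a per-request dollar cost; the token counts in the example are hypothetical.

```python
# Estimate the cost of a single request at the auto-routed prices
# listed above ($0.30 per 1M input tokens, $1.30 per 1M output tokens).
INPUT_PRICE_PER_M = 0.30
OUTPUT_PRICE_PER_M = 1.30

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of one request at the listed rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 100K-token prompt with a 4K-token completion.
print(f"${request_cost(100_000, 4_000):.4f}")  # $0.0352
```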
Auto routing is available for this model; explicit provider selection is not available.

Performance metrics and benchmarks
Sourced from Artificial Analysis. Last updated May 15, 2026.

Intelligence Index: 26.4
Coding Index: 26.3
Agentic Index: 16.2
GPQA Diamond, graduate-level scientific reasoning: 78.2% (better than 79% of models compared)
HLE, Humanity's Last Exam: 12.2% (better than 78% of models compared)
IFBench, instruction-following benchmark: 44.1% (better than 58% of models compared)
τ²-Bench Telecom, conversational AI agents in dual-control scenarios: 43.0% (better than 56% of models compared)
AA-LCR, long-context reasoning evaluation: 48.3% (better than 66% of models compared)
GDPval-AA, economically valuable tasks: 0.1% (better than 38% of models compared)
CritPt, research-level physics reasoning: 0.0% (better than 36% of models compared)
SciCode, Python programming for scientific computing: 34.8% (better than 64% of models compared)
Terminal-Bench Hard, agentic coding and terminal use: 22.0% (better than 69% of models compared)
AIME 2025, American Invitational Mathematics Examination 2025: 73.7% (better than 69% of models compared)
AIME, American Invitational Mathematics Examination: 87.3% (better than 94% of models compared)
MMLU-Pro, professional and academic subject knowledge: 83.5% (better than 84% of models compared)
AA-Omniscience Accuracy, proportion of correctly answered questions: 24.9% (better than 84% of models compared)
LiveCodeBench, contamination-free coding benchmark: 73.8% (better than 85% of models compared)
Math-500, diverse mathematical problem solving benchmark: 97.9% (better than 90% of models compared)
AA-Omniscience Hallucination Rate, rate of incorrect answers among non-correct responses: 67.2% (better than 81% of models compared)
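One plausible reading of the hallucination-rate description above ("rate of incorrect answers among non-correct responses") is the ratio sketched below. The counts are hypothetical, chosen only to reproduce the listed 67.2%; Artificial Analysis's exact scoring methodology may differ.

```python
# One reading of the AA-Omniscience Hallucination Rate: among
# responses that were not correct, the share that were confidently
# wrong rather than abstentions. Counts here are hypothetical.
def hallucination_rate(incorrect: int, abstained: int) -> float:
    """Incorrect answers as a fraction of all non-correct responses."""
    non_correct = incorrect + abstained
    return incorrect / non_correct if non_correct else 0.0

# Example: 672 wrong answers and 328 abstentions in the non-correct
# pool reproduce the listed 67.2% rate.
print(f"{hallucination_rate(672, 328):.1%}")  # 67.2%
```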