Thinking version of the latest GLM-series chat model with strong general performance. Quantized at FP8.
Added Sep 29, 2025
Context Window
200K tokens
Max Output
65.5K tokens
Input Price (Auto)
$0.37/1M tokens
Output Price (Auto)
$1.47/1M tokens
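As a rough illustration of the per-token pricing above, the cost of a single request can be estimated from its token counts (the rates are taken from this page; the token counts in the example are hypothetical):

```python
# Estimated cost of one request at the listed Auto-routing rates.
# Rates are USD per 1M tokens; the token counts below are hypothetical.
INPUT_RATE = 0.37 / 1_000_000   # $0.37 per 1M input tokens
OUTPUT_RATE = 1.47 / 1_000_000  # $1.47 per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for a single request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 50K-token prompt with a 4K-token completion.
cost = estimate_cost(50_000, 4_000)
print(f"${cost:.4f}")  # 0.0185 + 0.00588 = $0.0244
```

Note that actual billed cost may differ if Auto routing selects a provider with different rates.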
Capabilities
Performance metrics and benchmarks
Sourced from Artificial Analysis.
Intelligence Index
32.5
Coding Index
29.5
Agentic Index
41.6
GPQA Diamond
Graduate-level scientific reasoning
78.0%
Better than 78% of models compared
HLE
Humanity's Last Exam
13.3%
Better than 81% of models compared
IFBench
Instruction-following benchmark
43.4%
Better than 56% of models compared
τ²-Bench Telecom
Conversational AI agents in dual-control scenarios
70.5%
Better than 70% of models compared
AA-LCR
Long context reasoning evaluation
54.3%
Better than 72% of models compared
GDPval-AA
Economically valuable tasks
26.5%
Better than 86% of models compared
CritPt
Research-level physics reasoning
1.1%
Better than 85% of models compared
SciCode
Python programming for scientific computing
38.4%
Better than 77% of models compared
Terminal-Bench Hard
Agentic coding and terminal use
25.0%
Better than 74% of models compared
AIME 2025
American Invitational Mathematics Examination 2025
86.0%
Better than 84% of models compared
MMLU-Pro
Professional and academic subject knowledge
82.9%
Better than 82% of models compared
AA-Omniscience Accuracy
Proportion of correctly answered questions
27.3%
Better than 91% of models compared
Last updated May 15, 2026
LiveCodeBench
Contamination-free coding benchmark
69.5%
Better than 79% of models compared
AA-Omniscience Hallucination Rate
Rate of incorrect answers (rather than abstentions) among questions not answered correctly
95.0%
Better than 6% of models compared
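The hallucination-rate definition above reduces to simple arithmetic: among responses that were not correct, it is the share that were wrong answers rather than abstentions. A minimal sketch (the counts below are hypothetical, not taken from the benchmark):

```python
def hallucination_rate(correct: int, incorrect: int, abstained: int) -> float:
    """Share of incorrect answers among responses that were not correct.

    A non-correct response is either a wrong answer or an abstention.
    A rate near 100% means the model almost always guesses rather than
    declining to answer when it does not know.
    """
    non_correct = incorrect + abstained
    return incorrect / non_correct if non_correct else 0.0

# Hypothetical counts out of 1000 questions: 273 correct, 690 incorrect,
# 37 abstentions.
rate = hallucination_rate(273, 690, 37)
print(f"{rate:.1%}")  # 690 / 727 ≈ 94.9%
```

Under this definition a high accuracy score can still coexist with a high hallucination rate, which is why the two AA-Omniscience metrics are reported separately.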