A fast variant of GLM 4.6 for general chat, coding, and analysis, offering improved latency and strong reasoning.
Added Oct 2, 2025
Context Window
200.0K
Max Output
204.8K
Input Price (Auto)
$1.00/1M
Output Price (Auto)
$3.00/1M
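The auto-routing prices above are quoted per million tokens, so the cost of a single request is a simple linear function of its input and output token counts. A minimal sketch, assuming the listed rates ($1.00/1M input, $3.00/1M output); the function name and example token counts are illustrative, not part of any official API:

```python
# Cost estimate using the listed auto-routing prices.
INPUT_PRICE_PER_M = 1.00   # USD per 1M input tokens (from the listing)
OUTPUT_PRICE_PER_M = 3.00  # USD per 1M output tokens (from the listing)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# e.g. a 50K-token prompt producing a 2K-token reply:
print(round(estimate_cost(50_000, 2_000), 4))  # → 0.056
```

Note that actual billing may round or meter tokens differently; this is only a back-of-the-envelope estimate from the quoted rates.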
Performance metrics and benchmarks
Sourced from Artificial Analysis.
Intelligence Index
30.2
Coding Index
30.2
Auto routing is available for this model; explicit provider selection is not.
GPQA Diamond
Graduate-level scientific reasoning
63.2%
Better than 52% of models compared
HLE
Humanity's Last Exam
5.2%
Better than 47% of models compared
IFBench
Instruction-following benchmark
36.7%
Better than 36% of models compared
τ²-Bench Telecom
Conversational AI agents in dual-control scenarios
76.9%
Better than 75% of models compared
AA-LCR
Long context reasoning evaluation
26.3%
Better than 45% of models compared
SciCode
Python programming for scientific computing
33.1%
Better than 59% of models compared
Terminal-Bench Hard
Agentic coding and terminal use
28.8%
Better than 78% of models compared
AIME 2025
American Invitational Mathematics Examination 2025
44.3%
Better than 45% of models compared
MMLU-Pro
Professional and academic subject knowledge
78.4%
Better than 60% of models compared
LiveCodeBench
Contamination-free coding benchmark
56.1%
Better than 63% of models compared
Last updated May 15, 2026