GLM-5.1 with an extended reasoning mode for multi-step planning and tool-heavy workflows. It runs inside a TEE (Trusted Execution Environment) with provider attestation support.
Added Apr 20, 2026
Context Window
202.8K tokens
Max Output
65.5K tokens
Input Price (Auto)
$1.50/1M
Output Price (Auto)
$5.25/1M
Cache Read (Auto)
$0.30/1M
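The listed rates are per million tokens, with cached input billed at the lower cache-read price. A minimal sketch of the resulting cost arithmetic (the function name and the example token counts are illustrative, not part of any provider API):

```python
# Listed auto-routing rates, in USD per 1M tokens.
INPUT_PER_M = 1.50
OUTPUT_PER_M = 5.25
CACHE_READ_PER_M = 0.30

def estimate_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Estimated USD cost; cached input tokens are billed at the cache-read rate."""
    fresh = input_tokens - cached_tokens
    return (fresh * INPUT_PER_M
            + cached_tokens * CACHE_READ_PER_M
            + output_tokens * OUTPUT_PER_M) / 1_000_000

# e.g. a 100K-token prompt (40K of it cached) producing 8K output tokens:
print(round(estimate_cost(100_000, 8_000, 40_000), 4))  # → 0.144
```

Note how cache reads change the economics: the 40K cached tokens above cost $0.012 instead of the $0.06 they would at the full input rate.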
Capabilities
Performance metrics and benchmarks
Sourced from Artificial Analysis.
Intelligence Index
51.4
Auto routing is available for this model. Explicit provider selection is not available.
Coding Index
43.4
GPQA Diamond
Graduate-level scientific reasoning
86.8%
Better than 93% of models compared
HLE
Humanity's Last Exam
28.0%
Better than 93% of models compared
IFBench
Instruction-following benchmark
76.3%
Better than 96% of models compared
τ²-Bench Telecom
Conversational AI agents in dual-control scenarios
97.7%
Better than 99% of models compared
AA-LCR
Long context reasoning evaluation
62.3%
Better than 82% of models compared
SciCode
Python programming for scientific computing
43.8%
Better than 90% of models compared
Terminal-Bench Hard
Agentic coding and terminal use
43.2%
Better than 93% of models compared
Last updated May 11, 2026