GLM-4.7-Flash is a lightweight 30B model optimized for coding and agentic tasks. It runs inside a TEE (Trusted Execution Environment) and supports provider attestation.
Context Window: 203.0K tokens
Max Output: 65.5K tokens
Input Price (Auto): $0.15/1M tokens
Output Price (Auto): $0.50/1M tokens
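The listed rates make per-request cost a simple linear function of token counts. A minimal sketch of that arithmetic, using the Auto prices above (the function name and example token counts are illustrative, not part of the card):

```python
# Estimate the USD cost of one request at the card's Auto rates:
# $0.15 per 1M input tokens, $0.50 per 1M output tokens.
INPUT_PER_M = 0.15
OUTPUT_PER_M = 0.50

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost for a single request."""
    return (input_tokens / 1e6) * INPUT_PER_M + (output_tokens / 1e6) * OUTPUT_PER_M

# Example: a 100K-token prompt with a 2K-token completion.
print(round(request_cost(100_000, 2_000), 6))  # 0.016
```

At these rates, even a prompt that fills most of the 203.0K context window costs only a few cents.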
Performance metrics and benchmarks
Sourced from Artificial Analysis.
Intelligence Index: 22.1
Auto routing is available for this model. Explicit provider selection is not available.
Coding Index: 11.0
GPQA Diamond (graduate-level scientific reasoning): 45.2%, better than 27% of models compared
HLE (Humanity's Last Exam): 4.9%, better than 39% of models compared
IFBench (instruction-following benchmark): 46.3%, better than 63% of models compared
τ²-Bench Telecom (conversational AI agents in dual-control scenarios): 91.8%, better than 89% of models compared
AA-LCR (long-context reasoning evaluation): 14.7%, better than 30% of models compared
SciCode (Python programming for scientific computing): 25.5%, better than 38% of models compared
Terminal-Bench Hard (agentic coding and terminal use): 3.8%, better than 28% of models compared
Last updated May 15, 2026