GLM-4.7-Flash is a lightweight 30B model optimized for coding and agentic tasks. It balances high performance with efficiency, making it well suited for local deployment. Requests are routed directly via a Z-AI (Zhipu) subscription.
Added Jan 19, 2026
Context Window: 200K tokens
Max Output: 128K tokens
Input Price (Auto): $0.073 per 1M tokens
Output Price (Auto): $0.42 per 1M tokens
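As a rough illustration of the auto-routed pricing above, the per-request cost can be estimated from the per-million-token rates. This is a minimal sketch, not an official calculator; the token counts in the example are hypothetical.

```python
# Estimate request cost from the listed per-million-token rates.
INPUT_RATE = 0.073 / 1_000_000   # USD per input token ($0.073 / 1M)
OUTPUT_RATE = 0.42 / 1_000_000   # USD per output token ($0.42 / 1M)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 50K-token prompt producing a 4K-token completion.
cost = estimate_cost(50_000, 4_000)
print(f"${cost:.4f}")  # → $0.0053
```

Actual billing may differ (e.g. cache discounts or provider-specific rounding); treat this as a ballpark estimate only.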
Capabilities
Performance metrics and benchmarks
Sourced from Artificial Analysis.
Intelligence Index: 22.1
Coding Index: 11.0
GPQA Diamond (graduate-level scientific reasoning): 45.2% (better than 27% of models compared)
HLE (Humanity's Last Exam): 4.9% (better than 39% of models compared)
IFBench (instruction-following benchmark): 46.3% (better than 63% of models compared)
τ²-Bench Telecom (conversational AI agents in dual-control scenarios): 91.8% (better than 89% of models compared)
AA-LCR (long-context reasoning evaluation): 14.7% (better than 30% of models compared)
SciCode (Python programming for scientific computing): 25.5% (better than 38% of models compared)
Terminal-Bench Hard (agentic coding and terminal use): 3.8% (better than 28% of models compared)
Last updated May 15, 2026