A thinking-enabled variant of GLM-4.5V that surfaces structured reasoning before its final answer. Well suited to image-grounded analysis, OCR, chart reading, and deliberate step-by-step responses.
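Because the model is image-grounded and reasons before answering, a typical request pairs a text prompt with an image. Below is a minimal sketch against an OpenAI-compatible chat completions endpoint; the base URL, API key, and the glm-4.5v model slug are placeholder assumptions, and how the surfaced reasoning appears in the response depends on the serving provider.

```python
from openai import OpenAI

# Hypothetical gateway URL and model slug; substitute the values from your provider's catalog.
client = OpenAI(base_url="https://example-gateway/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="glm-4.5v",  # assumed identifier; check the listing for the exact slug
    messages=[
        {
            "role": "user",
            "content": [
                # Text instruction plus an image reference, using the standard
                # OpenAI-compatible multimodal content-part format.
                {"type": "text", "text": "Read the values off this chart and summarize the trend."},
                {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
            ],
        }
    ],
    max_tokens=2048,  # leave headroom for the reasoning that precedes the final answer
)

print(response.choices[0].message.content)
```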
Added Nov 22, 2025
Context Window
64.0K
Max Output
96.0K
Input Price (Auto)
$0.60/1M tokens
Output Price (Auto)
$1.80/1M tokens
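At the listed auto-routing rates, per-request cost follows directly from token counts. A minimal sketch; the token counts in the example are purely illustrative.

```python
# Listed auto-routing prices: $0.60 per 1M input tokens, $1.80 per 1M output tokens.
INPUT_PRICE_PER_M = 0.60
OUTPUT_PRICE_PER_M = 1.80

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request at the listed rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 50K-token prompt with a 4K-token thinking-plus-answer response
# costs roughly $0.030 + $0.0072 ≈ $0.037.
print(f"${estimate_cost(50_000, 4_000):.4f}")
```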
Performance metrics and benchmarks
Sourced from Artificial Analysis.
Intelligence Index
15.1
Auto routing is available for this model. Explicit provider selection is not available.
Coding Index
10.9
Agentic Index
10.9
GPQA Diamond
Graduate-level scientific reasoning
68.4%
Better than 59% of models compared
HLE
Humanity's Last Exam
5.9%
Better than 53% of models compared
IFBench
Instruction-following benchmark
34.2%
Better than 29% of models compared
τ²-Bench Telecom
Conversational AI agents in dual-control scenarios
22.5%
Better than 29% of models compared
AA-LCR
Long context reasoning evaluation
0.0%
Better than 7% of models compared
GDPval-AA
Economically valuable tasks
0.5%
Better than 39% of models compared
CritPt
Research-level physics reasoning
0.0%
Better than 36% of models compared
SciCode
Python programming for scientific computing
22.1%
Better than 28% of models compared
Terminal-Bench Hard
Agentic coding and terminal use
5.3%
Better than 36% of models compared
AIME 2025
American Invitational Mathematics Examination 2025
73.0%
Better than 68% of models compared
MMLU-Pro
Professional and academic subject knowledge
78.8%
Better than 62% of models compared
AA-Omniscience Accuracy
Proportion of correctly answered questions
20.6%
Better than 66% of models compared
LiveCodeBench
Contamination-free coding benchmark
60.4%
Better than 66% of models compared
AA-Omniscience Hallucination Rate
Rate of incorrect answers, rather than abstentions, among questions not answered correctly
80.9%
Better than 55% of models compared
Last updated May 15, 2026