Qwen 3.5 is an open-source 397B-parameter MoE model (17B active parameters) with hybrid linear attention and extended reasoning. It supports text, image, and video input with a 256K context window.
Added Feb 16, 2026
Context Window
258.0K
Max Output
65.5K
Input Price (Auto)
$0.41/1M
Output Price (Auto)
$2.46/1M
Cache Read (Auto)
$0.20/1M
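The listed Auto-routing prices can be turned into a per-request cost estimate. A minimal sketch, assuming cached tokens are the portion of the prompt served from the prompt cache and billed at the cache-read rate (the function name and example token counts are illustrative, not from the source):

```python
# Per-1M-token prices listed above for Auto routing (USD).
INPUT_PER_M = 0.41
OUTPUT_PER_M = 2.46
CACHE_READ_PER_M = 0.20

def estimate_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Estimated USD cost for one request.

    cached_tokens is the share of input_tokens read from the prompt
    cache; the remainder is billed at the fresh input rate.
    """
    fresh = input_tokens - cached_tokens
    return (fresh * INPUT_PER_M
            + cached_tokens * CACHE_READ_PER_M
            + output_tokens * OUTPUT_PER_M) / 1_000_000

# Hypothetical request: 100K-token prompt, 80K of it cached, 5K-token response.
cost = estimate_cost(100_000, 5_000, cached_tokens=80_000)
print(f"${cost:.4f}")  # → $0.0365
```

Note how cache reads dominate the savings here: the 80K cached tokens cost $0.016 instead of the $0.0328 they would at the fresh input rate.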
Capabilities
Performance metrics and benchmarks
Sourced from Artificial Analysis.
Intelligence Index
45.0
Coding Index
41.3
GPQA Diamond
Graduate-level scientific reasoning
89.3%
Better than 97% of models compared
HLE
Humanity's Last Exam
27.3%
Better than 94% of models compared
IFBench
Instruction-following benchmark
78.8%
Better than 98% of models compared
τ²-Bench Telecom
Conversational AI agents in dual-control scenarios
95.6%
Better than 96% of models compared
AA-LCR
Long context reasoning evaluation
65.7%
Better than 89% of models compared
SciCode
Python programming for scientific computing
42.0%
Better than 88% of models compared
Terminal-Bench Hard
Agentic coding and terminal use
40.9%
Better than 92% of models compared
Last updated May 15, 2026