A Qwen3 vision-language model built on a 235B-parameter Mixture-of-Experts backbone (≈22B parameters active per token). It is strong at OCR, chart and table parsing, multi-image reasoning, and complex document understanding; the Thinking variant enables long-form, chain-of-thought style reasoning.
Added Aug 26, 2025
Context Window: N/A
Max Output: 32.8K tokens
Input Price (Auto): $0.50 / 1M tokens
Output Price (Auto): $6.00 / 1M tokens
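The listed per-token pricing can be turned into a per-request cost estimate with simple arithmetic. A minimal sketch, assuming the quoted Auto rates apply uniformly and ignoring any caching or image-token surcharges (the function and constant names are illustrative, not part of any API):

```python
# Rates taken from the listed Auto pricing; USD per 1M tokens.
INPUT_PRICE_PER_M = 0.50
OUTPUT_PRICE_PER_M = 6.00

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request at the listed rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 40K-token document in, an 8K-token reasoning trace out.
print(round(estimate_cost(40_000, 8_000), 4))  # 0.02 + 0.048 = 0.068
```

Note the asymmetry: output tokens cost 12x more than input tokens here, so long Thinking-mode reasoning traces dominate the bill for most workloads.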
Capabilities
Performance metrics and benchmarks, sourced from Artificial Analysis.
Intelligence Index: 27.6
Auto routing is available for this model. Explicit provider selection is not available.
Coding Index: 20.9
GPQA Diamond (graduate-level scientific reasoning): 77.2%, better than 77% of models compared
HLE (Humanity's Last Exam): 10.1%, better than 73% of models compared
IFBench (instruction-following benchmark): 56.5%, better than 75% of models compared
τ²-Bench Telecom (conversational AI agents in dual-control scenarios): 54.1%, better than 63% of models compared
AA-LCR (long-context reasoning evaluation): 58.7%, better than 78% of models compared
SciCode (Python programming for scientific computing): 39.9%, better than 83% of models compared
Terminal-Bench Hard (agentic coding and terminal use): 11.4%, better than 53% of models compared
AIME 2025 (American Invitational Mathematics Examination 2025): 88.3%, better than 87% of models compared
MMLU-Pro (professional and academic subject knowledge): 83.6%, better than 85% of models compared
LiveCodeBench (contamination-free coding benchmark): 64.6%, better than 71% of models compared
Last updated May 15, 2026