Qwen3-30B-A3B-Instruct-2507 is a mixture-of-experts (MoE) causal language model with 30.5B total parameters and 3.3B active parameters per token. It runs inside a TEE (Trusted Execution Environment), with provider attestation support.
Context Window: 262.0K tokens
Max Output: 32.8K tokens
Input Price (Auto): $0.15/1M tokens
Output Price (Auto): $0.45/1M tokens
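As a rough illustration of the prices above, per-request cost can be estimated from token counts. A minimal sketch (the token counts in the example are hypothetical, not from this page):

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price_per_m: float = 0.15,
                  output_price_per_m: float = 0.45) -> float:
    """Estimate request cost in USD from token counts and per-1M-token prices."""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# Example: a 100K-token prompt with a 10K-token completion
cost = estimate_cost(100_000, 10_000)
print(f"${cost:.4f}")  # → $0.0195
```

Note that the quoted rates apply to auto routing; actual billing depends on the provider serving the request.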
Performance metrics and benchmarks
Sourced from Artificial Analysis.
Intelligence Index: 25.0
Auto routing is available for this model. Explicit provider selection is not available.
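Since only auto routing is available, a request simply names the model and lets the platform pick the provider. A sketch of the request payload, assuming an OpenAI-compatible chat-completions API (the field names and prompt are assumptions, not taken from this page):

```python
import json

# Hypothetical OpenAI-compatible payload; with auto routing, naming the
# model is sufficient -- no explicit provider field is set.
payload = {
    "model": "Qwen3-30B-A3B-Instruct-2507",
    "messages": [
        {"role": "user", "content": "Summarize MoE models in one sentence."}
    ],
    "max_tokens": 512,
}
print(json.dumps(payload, indent=2))
```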
Coding Index: 22.1
GPQA Diamond (graduate-level scientific reasoning): 75.3%, better than 72% of models compared
HLE (Humanity's Last Exam): 10.6%, better than 74% of models compared
IFBench (instruction-following benchmark): 46.1%, better than 62% of models compared
τ²-Bench Telecom (conversational AI agents in dual-control scenarios): 33.3%, better than 48% of models compared
AA-LCR (long-context reasoning evaluation): 31.2%, better than 51% of models compared
SciCode (Python programming for scientific computing): 36.0%, better than 68% of models compared
Terminal-Bench Hard (agentic coding and terminal use): 15.2%, better than 59% of models compared
AIME 2025 (American Invitational Mathematics Examination 2025): 71.7%, better than 66% of models compared
AIME (American Invitational Mathematics Examination): 71.7%, better than 81% of models compared
MMLU-Pro (professional and academic subject knowledge): 82.8%, better than 82% of models compared
Last updated May 15, 2026
LiveCodeBench (contamination-free coding benchmark): 52.4%, better than 59% of models compared
Math-500 (diverse mathematical problem-solving benchmark): 98.0%, better than 91% of models compared