Nvidia's latest Nemotron 3 Nano is a 30B-total-parameter (3B active) model built on a hybrid Mamba-Transformer MoE architecture, offering high throughput and strong reasoning capabilities.
Added Mar 1, 2026
Context Window
256.0K
Max Output
262.1K
Input Price (Auto)
$0.17/1M
Output Price (Auto)
$0.68/1M
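As a rough illustration (not an official billing formula), the per-request cost at the auto-routed rates above can be estimated as follows; the token counts are hypothetical examples, not measured usage:

```python
# Illustrative cost estimate at the listed auto-routed rates:
# $0.17 per 1M input tokens, $0.68 per 1M output tokens.
INPUT_PRICE_PER_M = 0.17
OUTPUT_PRICE_PER_M = 0.68

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost for one request at the listed rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# e.g. a request with 100K prompt tokens and 10K completion tokens:
print(f"${estimate_cost(100_000, 10_000):.4f}")  # → $0.0238
```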
Performance metrics and benchmarks
Sourced from Artificial Analysis.
Intelligence Index
13.2
Auto routing is available for this model. Explicit provider selection is not available.
Coding Index
15.8
Agentic Index
8.5
GPQA Diamond
Graduate-level scientific reasoning
39.9%
Better than 21% of models compared
HLE
Humanity's Last Exam
4.6%
Better than 31% of models compared
IFBench
Instruction-following benchmark
37.5%
Better than 38% of models compared
τ²-Bench Telecom
Conversational AI agents in dual-control scenarios
25.4%
Better than 35% of models compared
AA-LCR
Long context reasoning evaluation
6.7%
Better than 21% of models compared
GDPval-AA
Economically valuable tasks
0.0%
Better than 18% of models compared
CritPt
Research-level physics reasoning
0.0%
Better than 36% of models compared
SciCode
Python programming for scientific computing
23.0%
Better than 32% of models compared
Terminal-Bench Hard
Agentic coding and terminal use
12.1%
Better than 54% of models compared
AIME 2025
American Invitational Mathematics Examination 2025
13.3%
Better than 16% of models compared
MMLU-Pro
Professional and academic subject knowledge
57.9%
Better than 21% of models compared
AA-Omniscience Accuracy
Proportion of correctly answered questions
11.4%
Better than 18% of models compared
Last updated May 15, 2026
LiveCodeBench
Contamination-free coding benchmark
36.0%
Better than 44% of models compared
AA-Omniscience Hallucination Rate
Share of non-correct responses that are incorrect answers rather than abstentions
90.9%
Better than 13% of models compared
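To make the relationship between the two AA-Omniscience figures concrete, here is a minimal sketch of the assumed scoring: accuracy is correct answers over all questions, while the hallucination rate counts wrong answers as a share of all non-correct responses (wrong plus abstained). The counts below are hypothetical, chosen only so the output matches the listed values:

```python
# Assumed AA-Omniscience scoring (inferred from the metric descriptions above).
# Counts are hypothetical illustrations, not the benchmark's actual data.
def omniscience_metrics(correct: int, incorrect: int, abstained: int):
    total = correct + incorrect + abstained
    accuracy = correct / total
    non_correct = incorrect + abstained
    # A model that abstains when unsure lowers this rate without more knowledge.
    hallucination_rate = incorrect / non_correct if non_correct else 0.0
    return accuracy, hallucination_rate

acc, hall = omniscience_metrics(correct=114, incorrect=805, abstained=81)
print(f"accuracy={acc:.1%}, hallucination rate={hall:.1%}")
# → accuracy=11.4%, hallucination rate=90.9%
```

This is why the two metrics move independently: abstaining more often leaves accuracy unchanged but reduces the hallucination rate.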