Nvidia's Nemotron 3 Super 120B A12B model, from the March 2026 Nemotron 3 release, uses a hybrid Mamba-Transformer MoE architecture and targets agentic and coding workloads with a 262K-token context window.
Added Mar 1, 2026
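
Below is a minimal usage sketch, assuming an OpenAI-compatible chat completions endpoint. The base URL and the model slug are placeholders, not confirmed identifiers for this listing; substitute whatever your gateway's catalog shows for this model.

```python
# Minimal sketch, assuming an OpenAI-compatible endpoint.
# The base_url and model slug below are ASSUMPTIONS (placeholders);
# use the exact identifier from your provider's model catalog.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-gateway.com/v1",  # hypothetical gateway URL
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="nvidia/nemotron-3-super-120b-a12b",  # hypothetical slug
    messages=[
        {"role": "user", "content": "Refactor this function to be iterative."},
    ],
    max_tokens=16_384,  # the listed 16.4K max-output cap
)
print(response.choices[0].message.content)
```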
Context Window: 262.1K tokens
Max Output: 16.4K tokens
Input Price (Auto): $0.05 per 1M tokens
Output Price (Auto): $0.25 per 1M tokens
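
As a quick worked example, here is what the Auto rates imply for a single large request. The token counts are illustrative, not measured; the prices are the listed $0.05/1M input and $0.25/1M output.

```python
# Worked cost estimate at the listed Auto-routing rates.
# Token counts below are illustrative ASSUMPTIONS, not measurements.
INPUT_PRICE_PER_M = 0.05    # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.25   # USD per 1M output tokens

input_tokens = 200_000      # a long prompt, within the 262.1K window
output_tokens = 16_384      # the full 16.4K output cap

cost = (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
       (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M
print(f"${cost:.4f}")       # $0.0141 (= $0.0100 input + $0.0041 output)
```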
Capabilities

Performance metrics and benchmarks, sourced from Artificial Analysis.
Intelligence Index: 36.0
Coding Index: 31.2
Agentic Index: 40.2

Auto routing is available for this model; explicit provider selection is not.
GPQA Diamond (graduate-level scientific reasoning): 80.0%, better than 83% of models compared
HLE (Humanity's Last Exam): 19.2%, better than 88% of models compared
IFBench (instruction-following benchmark): 71.5%, better than 91% of models compared
τ²-Bench Telecom (conversational AI agents in dual-control scenarios): 67.8%, better than 69% of models compared
AA-LCR (long-context reasoning evaluation): 60.0%, better than 81% of models compared
GDPval-AA (economically valuable tasks): 25.2%, better than 83% of models compared
CritPt (research-level physics reasoning): 3.1%, better than 94% of models compared
SciCode (Python programming for scientific computing): 36.0%, better than 68% of models compared
Terminal-Bench Hard (agentic coding and terminal use): 28.8%, better than 78% of models compared
AA-Omniscience Accuracy (proportion of questions answered correctly): 24.0%, better than 82% of models compared
AA-Omniscience Hallucination Rate (rate of incorrect answers among non-correct responses; lower is better): 87.0%, better than 29% of models compared
Last updated May 15, 2026 (Artificial Analysis).