Upstage's Solar Pro 3 is a Mixture-of-Experts (MoE) language model with 102B total parameters and 12B active parameters per forward pass, optimized for Korean with strong English and Japanese support.
Added Mar 3, 2026
Context Window
128.0K
Max Output
128.0K
Input Price (Auto)
$0.15/1M
Output Price (Auto)
$0.60/1M
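At these rates, per-request cost is a linear function of token counts. A minimal sketch of the arithmetic (the token counts below are hypothetical, chosen only for illustration):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float = 0.15,
                 output_price_per_m: float = 0.60) -> float:
    """USD cost of one request at Solar Pro 3's auto-routing rates
    ($0.15 per 1M input tokens, $0.60 per 1M output tokens)."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# e.g. a 10K-token prompt with a 2K-token completion:
cost = request_cost(10_000, 2_000)
print(f"${cost:.6f}")  # prints "$0.002700"
```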
Performance metrics and benchmarks
Sourced from Artificial Analysis.
Intelligence Index
25.9
Auto routing is available for this model. Explicit provider selection is not available.
Coding Index
13.3
Agentic Index
34.9
GPQA Diamond
Graduate-level scientific reasoning
72.4%
Better than 66% of models compared
HLE
Humanity's Last Exam
10.1%
Better than 73% of models compared
IFBench
Instruction-following benchmark
71.2%
Better than 91% of models compared
τ²-Bench Telecom
Conversational AI agents in dual-control scenarios
86.3%
Better than 82% of models compared
AA-LCR
Long context reasoning evaluation
27.0%
Better than 46% of models compared
GDPval-AA
Economically valuable tasks
8.8%
Better than 59% of models compared
CritPt
Research-level physics reasoning
0.0%
Better than 36% of models compared
SciCode
Python programming for scientific computing
24.7%
Better than 36% of models compared
Terminal-Bench Hard
Agentic coding and terminal use
7.6%
Better than 45% of models compared
AA-Omniscience Accuracy
Proportion of correctly answered questions
18.1%
Better than 51% of models compared
AA-Omniscience Hallucination Rate
Share of non-correct responses that are incorrect answers rather than abstentions (lower is better)
87.8%
Better than 27% of models compared
Last updated May 15, 2026