Llama 3.3 is optimized for multilingual dialogue use cases and outperforms many of the available open source and closed chat models on common industry benchmarks.
Added Feb 27, 2025
Context Window
131.1K tokens
Max Output
16.4K tokens
Input Price (Auto)
$0.053 / 1M tokens
Output Price (Auto)
$0.24 / 1M tokens
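For a rough sense of what these Auto rates mean in practice, here is a minimal Python sketch that estimates per-request cost from the listed prices (read as USD per 1M tokens) and checks the token limits above. The token counts in the example are illustrative, not taken from this page, and whether the context window bounds input and output together is an assumption.

```python
# Rough per-request cost estimate from the listed Auto rates.
# Assumptions (not stated on this page): prices are USD per 1M tokens,
# and the 131.1K context window bounds input + output tokens together.

INPUT_PRICE_PER_M = 0.053   # USD per 1M input tokens (Auto)
OUTPUT_PRICE_PER_M = 0.24   # USD per 1M output tokens (Auto)
CONTEXT_WINDOW = 131_100    # ~131.1K tokens as listed
MAX_OUTPUT = 16_400         # ~16.4K tokens as listed


def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request at the Auto rates."""
    if output_tokens > MAX_OUTPUT:
        raise ValueError("completion exceeds the 16.4K max output")
    if input_tokens + output_tokens > CONTEXT_WINDOW:
        raise ValueError("request exceeds the 131.1K context window")
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000


# Illustrative example: a 20K-token prompt with a 1K-token completion.
print(f"${estimate_cost(20_000, 1_000):.6f}")  # -> $0.001300
```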
Capabilities
Performance metrics and benchmarks
Sourced from Artificial Analysis. Last updated May 15, 2026.
Intelligence Index
14.5
Coding Index
10.7
Agentic Index
9.1
GPQA Diamond
Graduate-level scientific reasoning
49.8%
Better than 33% of models compared
HLE
Humanity's Last Exam
4.0%
Better than 15% of models compared
IFBench
Instruction-following benchmark
47.1%
Better than 64% of models compared
τ²-Bench Telecom
Conversational AI agents in dual-control scenarios
26.6%
Better than 37% of models compared
AA-LCR
Long context reasoning evaluation
15.0%
Better than 31% of models compared
GDPval-AA
Economically valuable tasks
0.0%
Better than 18% of models compared
CritPt
Research-level physics reasoning
0.0%
Better than 36% of models compared
SciCode
Python programming for scientific computing
26.0%
Better than 39% of models compared
Terminal-Bench Hard
Agentic coding and terminal use
3.0%
Better than 26% of models compared
AIME 2025
American Invitational Mathematics Examination 2025
7.7%
Better than 12% of models compared
AIME
American Invitational Mathematics Examination
30.0%
Better than 57% of models compared
MMLU-Pro
Professional and academic subject knowledge
71.3%
Better than 39% of models compared
AA-Omniscience Accuracy
Proportion of correctly answered questions
17.9%
Better than 49% of models compared
LiveCodeBench
Contamination-free coding benchmark
28.8%
Better than 32% of models compared
MATH-500
Diverse mathematical problem solving benchmark
77.3%
Better than 42% of models compared
AA-Omniscience Hallucination Rate
Share of incorrect answers, rather than abstentions, among questions not answered correctly (lower is better)
85.1%
Better than 40% of models compared
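The "Better than X% of models compared" lines above express each score as a share of the compared model set. As a rough illustration only, the sketch below shows one plausible way such a figure could be derived; the comparison scores are made up and the exact Artificial Analysis methodology is an assumption here.

```python
# Illustrative only: one plausible way a "Better than X% of models
# compared" figure could be computed. The comparison scores below are
# made up; the actual Artificial Analysis methodology is an assumption.

def better_than_pct(score: float, other_scores: list[float]) -> float:
    """Percentage of compared models that this score strictly beats."""
    if not other_scores:
        return 0.0
    beaten = sum(1 for s in other_scores if s < score)
    return 100.0 * beaten / len(other_scores)


# Hypothetical GPQA Diamond scores for six other compared models.
others = [38.0, 44.5, 52.0, 61.3, 70.2, 81.0]
print(f"Better than {better_than_pct(49.8, others):.0f}% of models compared")
# -> "Better than 33% of models compared" (2 of the 6 scores are lower)
```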