Meta's Llama 3.1 8B model, served on an open, permissionless network.
Context Window: 128.0K tokens
Max Output: 16.4K tokens
Input Price (Auto): $0.020 / 1M tokens
Output Price (Auto): $0.030 / 1M tokens
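As a rough illustration of the pricing above, per-request cost is just tokens ÷ 1,000,000 × price for each direction. A minimal sketch (the token counts in the example are hypothetical, not from this page):

```python
# Per-million-token prices listed above (Auto routing), in dollars.
INPUT_PRICE_PER_M = 0.020
OUTPUT_PRICE_PER_M = 0.030

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated dollar cost of one request at the listed Auto prices."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 100K-token prompt with a 16K-token completion (hypothetical sizes,
# chosen to sit inside the 128.0K context window and 16.4K max output).
print(f"${request_cost(100_000, 16_000):.5f}")  # $0.00248
```

At these rates a near-full-context request still costs well under a cent.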
Performance metrics and benchmarks, sourced from Artificial Analysis.
Intelligence Index: 8.9
Coding Index: 6.8
Agentic Index: 5.5

Auto routing is available for this model; explicit provider selection is not available.
GPQA Diamond (graduate-level scientific reasoning): 37.9%, better than 19% of models compared
HLE (Humanity's Last Exam): 4.4%, better than 26% of models compared
IFBench (instruction-following benchmark): 37.1%, better than 37% of models compared
τ²-Bench Telecom (conversational AI agents in dual-control scenarios): 0.0%, better than 3% of models compared
AA-LCR (long-context reasoning evaluation): 0.0%, better than 7% of models compared
GDPval-AA (economically valuable tasks): 0.0%, better than 18% of models compared
CritPt (research-level physics reasoning): 0.0%, better than 36% of models compared
SciCode (Python programming for scientific computing): 18.9%, better than 24% of models compared
Terminal-Bench Hard (agentic coding and terminal use): 0.8%, better than 13% of models compared
AIME (American Invitational Mathematics Examination): 0.0%, better than 3% of models compared
Math-500 (diverse mathematical problem-solving benchmark): 48.3%, better than 12% of models compared
MMLU-Pro (professional and academic subject knowledge): 57.4%, better than 20% of models compared
AA-Omniscience Accuracy (proportion of correctly answered questions): 8.1%, better than 9% of models compared
LiveCodeBench (contamination-free coding benchmark): 19.8%, better than 21% of models compared
AA-Omniscience Hallucination Rate (rate of incorrect answers among non-correct responses): 41.8%, better than 95% of models compared

Last updated May 15, 2026.