Mixtral 8x7B is a high-quality sparse Mixture of Experts (MoE) model with 46.7B total parameters, of which only about 13B are active per token. It excels at text generation, summarization, question answering, and code generation, and supports English, French, German, Spanish, and Italian. Apache 2.0 licensed.
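Only two of the model's eight experts are evaluated for each token, which is why the active parameter count is so much smaller than the total. A rough back-of-envelope sketch of that split in Python; the per-expert and shared parameter counts below are approximations introduced here for illustration, not figures from this page:

```python
# Back-of-envelope: why Mixtral 8x7B activates only ~13B of its ~47B parameters
# per token. The per-component counts below are rough approximations, not
# official figures from this page.
n_experts = 8            # experts per MoE layer
active_experts = 2       # top-2 routing: experts actually evaluated per token
expert_params = 5.64e9   # approx. parameters in one expert's FFN blocks across all layers
shared_params = 1.6e9    # approx. shared attention, embedding, and norm parameters

total = n_experts * expert_params + shared_params
active = active_experts * expert_params + shared_params
print(f"total ≈ {total / 1e9:.1f}B, active per token ≈ {active / 1e9:.1f}B")
# total ≈ 46.7B, active per token ≈ 12.9B
```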
Added Dec 11, 2025
Context Window: 32.8K tokens
Max Output: 32.8K tokens
Input Price (Auto): $0.27 per 1M tokens
Output Price (Auto): $0.27 per 1M tokens
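With symmetric input and output pricing, estimating the cost of a request is simple multiplication. A minimal sketch, assuming the listed Auto price of $0.27 per million tokens applies to both directions:

```python
# Rough cost estimate for a Mixtral 8x7B request at the listed Auto prices.
INPUT_PRICE_PER_M = 0.27   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.27  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one request."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 20K-token prompt with a 1K-token completion.
print(f"${estimate_cost(20_000, 1_000):.6f}")  # ≈ $0.005670
```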
Performance metrics and benchmarks
Sourced from Artificial Analysis.
Intelligence Index: 7.7
Auto routing is available for this model. Explicit provider selection is not available.
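Because only auto routing is exposed, a request simply names the model and lets the platform pick the provider. A minimal sketch using the OpenAI Python SDK against an OpenAI-compatible endpoint; the base URL, API key variable, and model identifier below are assumptions for illustration, not values confirmed by this page:

```python
# Minimal sketch of calling Mixtral 8x7B through an OpenAI-compatible endpoint
# with auto routing. The base URL, environment variable, and model identifier
# are illustrative assumptions, not values taken from this page.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",    # hypothetical gateway URL
    api_key=os.environ["EXAMPLE_API_KEY"],    # hypothetical key variable
)

response = client.chat.completions.create(
    model="mistralai/mixtral-8x7b-instruct",  # assumed model identifier
    messages=[
        {"role": "user",
         "content": "Summarize the Apache 2.0 license in two sentences."},
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```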
GPQA Diamond (graduate-level scientific reasoning): 29.2%, better than 7% of models compared
HLE (Humanity's Last Exam): 4.5%, better than 29% of models compared
SciCode (Python programming for scientific computing): 2.8%, better than 3% of models compared
LiveCodeBench (contamination-free coding benchmark): 6.6%, better than 4% of models compared
AIME (American Invitational Mathematics Examination): 0.0%, better than 3% of models compared
Math-500 (diverse mathematical problem solving): 29.9%, better than 5% of models compared
MMLU-Pro (professional and academic subject knowledge): 38.7%, better than 7% of models compared
Benchmark data last updated May 15, 2026 by Artificial Analysis.