Kimi K2 (0711 version) is a Mixture-of-Experts (MoE) foundation model with strong coding and agentic capabilities, featuring 1 trillion total parameters and 32 billion activated per token. In benchmark evaluations covering general knowledge, reasoning, programming, mathematics, and agent-related tasks, K2 outperforms other leading open-source models. Served quantized at FP8.
Context Window
128.0K
Max Output
8.2K
Input Price (Auto)
$0.42/1M
Output Price (Auto)
$1.89/1M
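The listed rates are per million tokens, so a request's cost is a simple weighted sum. A minimal sketch of that arithmetic, using the auto-routing prices above (the token counts in the example are hypothetical):

```python
# Estimated request cost for Kimi K2 at the listed auto-routing rates.
# Prices are per 1M tokens; the example token counts are hypothetical.
INPUT_PRICE_PER_M = 0.42   # $ per 1M input tokens
OUTPUT_PRICE_PER_M = 1.89  # $ per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for a single request."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 100K-token prompt with an 8K-token completion
print(f"${request_cost(100_000, 8_000):.4f}")  # → $0.0571
```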
Capabilities
Performance metrics and benchmarks
Sourced from Artificial Analysis.
Intelligence Index
26.3
Coding Index
22.1
GPQA Diamond
Graduate-level scientific reasoning
76.6%
Better than 75% of models compared
HLE
Humanity's Last Exam
7.0%
Better than 60% of models compared
IFBench
Instruction-following benchmark
41.5%
Better than 50% of models compared
τ²-Bench Telecom
Conversational AI agents in dual-control scenarios
61.1%
Better than 65% of models compared
AA-LCR
Long context reasoning evaluation
51.0%
Better than 67% of models compared
SciCode
Python programming for scientific computing
34.5%
Better than 63% of models compared
Terminal-Bench Hard
Agentic coding and terminal use
15.9%
Better than 60% of models compared
AIME 2025
American Invitational Mathematics Examination 2025
57.0%
Better than 54% of models compared
AIME
American Invitational Mathematics Examination
69.3%
Better than 79% of models compared
MMLU-Pro
Professional and academic subject knowledge
82.4%
Better than 80% of models compared
Last updated May 15, 2026
LiveCodeBench
Contamination-free coding benchmark
55.6%
Better than 62% of models compared
Math-500
Diverse mathematical problem solving benchmark
97.1%
Better than 87% of models compared