Kimi K2 0905. Kimi K2 is a Mixture-of-Experts (MoE) foundation model with exceptional coding and agentic capabilities, featuring 1 trillion total parameters and 32 billion activated parameters. In benchmark evaluations covering general knowledge reasoning, programming, mathematics, and agentic tasks, K2 outperforms other leading open-source models. Served quantized at FP8.
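The 1T-total / 32B-activated split above is the hallmark of MoE inference: a router selects a small subset of experts per token, so only a fraction of the weights run on any forward pass. A minimal sketch of top-k expert routing, with hypothetical expert counts (the real K2 routing configuration is not stated here):

```python
import random

# Hypothetical numbers for illustration only; not K2's actual config.
TOTAL_EXPERTS = 384   # experts available in a layer
ACTIVE_EXPERTS = 8    # top-k experts actually run per token

def route(token_scores):
    """Return indices of the top-k highest-scoring experts for one token.

    In a real MoE layer these scores come from a learned gating network;
    only the selected experts' parameters are used for this token.
    """
    ranked = sorted(range(len(token_scores)), key=lambda i: -token_scores[i])
    return ranked[:ACTIVE_EXPERTS]

scores = [random.random() for _ in range(TOTAL_EXPERTS)]
chosen = route(scores)
print(len(chosen))  # → 8
```

Because the non-selected experts stay idle, the per-token compute tracks the 32B activated parameters rather than the 1T total.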
Added Sep 25, 2025
Context Window
256.0K
Max Output
262.1K
Input Price (Auto)
$0.42/1M
Output Price (Auto)
$1.89/1M
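The Auto prices above are quoted per million tokens, billed separately for input and output. A quick cost estimate for a single request (token counts are made-up examples):

```python
# Auto-routing prices listed above, in USD per 1M tokens.
INPUT_PRICE_PER_M = 0.42
OUTPUT_PRICE_PER_M = 1.89

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed Auto prices."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# e.g. a 20k-token prompt producing a 2k-token completion:
print(f"${request_cost(20_000, 2_000):.6f}")  # → $0.012180
```

Output tokens cost roughly 4.5x input tokens here, so long completions dominate the bill even when the prompt is much larger.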
Capabilities
Performance metrics and benchmarks
Sourced from Artificial Analysis.
Intelligence Index
30.9
Coding Index
25.9
Agentic Index
37.7
GPQA Diamond
Graduate-level scientific reasoning
76.7%
Better than 75% of models compared
HLE
Humanity's Last Exam
6.3%
Better than 56% of models compared
IFBench
Instruction-following benchmark
41.7%
Better than 51% of models compared
τ²-Bench Telecom
Conversational AI agents in dual-control scenarios
73.4%
Better than 72% of models compared
AA-LCR
Long context reasoning evaluation
52.3%
Better than 69% of models compared
GDPval-AA
Economically valuable tasks
18.2%
Better than 77% of models compared
CritPt
Research-level physics reasoning
0.0%
Better than 36% of models compared
SciCode
Python programming for scientific computing
30.7%
Better than 54% of models compared
Terminal-Bench Hard
Agentic coding and terminal use
23.5%
Better than 71% of models compared
AIME 2025
American Invitational Mathematics Examination 2025
57.3%
Better than 54% of models compared
MMLU-Pro
Professional and academic subject knowledge
81.9%
Better than 77% of models compared
AA-Omniscience Accuracy
Proportion of correctly answered questions
25.4%
Better than 88% of models compared
Last updated May 15, 2026
LiveCodeBench
Contamination-free coding benchmark
61.0%
Better than 67% of models compared
AA-Omniscience Hallucination Rate
Incorrect answers as a share of all non-correct responses (wrong answers plus abstentions)
69.6%
Better than 78% of models compared