Kimi K2 Thinking. Kimi K2 is a Mixture-of-Experts (MoE) foundation model with exceptional coding and agentic capabilities, featuring 1 trillion total parameters and 32 billion activated parameters. In benchmark evaluations covering general knowledge, reasoning, programming, mathematics, and agentic tasks, K2 outperforms other leading open-source models. Quantized at FP8.
Added Nov 6, 2025
Context Window: 256.0K
Max Output: 262.1K
Input Price (Auto): $0.31/1M
Output Price (Auto): $1.26/1M
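As a quick sanity check of the Auto pricing above, here is a minimal sketch of per-request cost estimation. The prices come from this listing; the token counts in the example are hypothetical.

```python
# Estimate request cost from the listed Auto prices (USD per 1M tokens).
INPUT_PRICE_PER_M = 0.31   # $/1M input tokens (from the listing)
OUTPUT_PRICE_PER_M = 1.26  # $/1M output tokens (from the listing)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for a single request."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 50K-token prompt with a 10K-token response (hypothetical counts).
print(f"${request_cost(50_000, 10_000):.4f}")  # → $0.0281
```

Note that output tokens cost roughly 4x as much as input tokens, so long generations dominate the bill for short prompts.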
Capabilities
Performance metrics and benchmarks
Sourced from Artificial Analysis.
Intelligence Index: 40.9
Coding Index: 34.8
GPQA Diamond
Graduate-level scientific reasoning
83.8%
Better than 89% of models compared
HLE
Humanity's Last Exam
22.3%
Better than 90% of models compared
IFBench
Instruction-following benchmark
68.1%
Better than 86% of models compared
τ²-Bench Telecom
Conversational AI agents in dual-control scenarios
93.0%
Better than 91% of models compared
AA-LCR
Long context reasoning evaluation
66.3%
Better than 91% of models compared
SciCode
Python programming for scientific computing
42.4%
Better than 89% of models compared
Terminal-Bench Hard
Agentic coding and terminal use
31.1%
Better than 80% of models compared
AIME 2025
American Invitational Mathematics Examination 2025
94.7%
Better than 96% of models compared
MMLU-Pro
Professional and academic subject knowledge
84.8%
Better than 90% of models compared
LiveCodeBench
Contamination-free coding benchmark
85.3%
Better than 96% of models compared
Last updated May 15, 2026