Model Overview
Description:
The NVIDIA Kimi-K2-Thinking Eagle3 model is the Eagle head for Moonshot AI's Kimi-K2-Thinking model, an auto-regressive language model that uses an optimized transformer architecture. For more information, please check here. The NVIDIA Kimi-K2-Thinking Eagle3 model incorporates Eagle speculative decoding with Model Optimizer.
This model is ready for commercial/non-commercial use.
License/Terms of Use:
Use of these model weights is governed by the NVIDIA Open Model License. ADDITIONAL INFORMATION: Kimi-K2-Thinking Modified MIT License. Kimi K2.
Deployment Geography:
Global
Use Case:
Developers designing AI Agent systems, chatbots, RAG systems, and other AI-powered applications. Also suitable for typical instruction-following tasks.
Release Date:
Hugging Face [03/4/2025] via [https://huggingface.co/nvidia/Kimi-K2-Thinking-Eagle3]
Reference(s):
- Introducing Kimi K2 Thinking
- EAGLE-3: Scaling up Inference Acceleration of Large Language Models via Training-Time Test
Model Architecture:
Architecture Type: Transformers
Network Architecture: DeepSeek V3
*This model was developed based on moonshotai/Kimi-K2-Thinking.
** Number of model parameters: 1.8 × 10^9
Input:
Input Type(s): Text
Input Format(s): String
Input Parameters: 1D (One Dimensional): Sequences
Other Properties Related to Input: Context length 4096
Output:
Output Type(s): Text
Output Format: String
Output Parameters: 1D (One Dimensional): Sequences
Other Properties Related to Output: None
Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.
Software Integration:
Runtime Engine(s):
- TensorRT-LLM
Supported Hardware Microarchitecture Compatibility:
- NVIDIA Blackwell
Preferred Operating System(s):
- Linux
The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.
Model Version(s):
** The model (v1) is trained with nvidia-modelopt v0.42.0
Training and Evaluation Datasets:
** Total size (in number of data points) 48K.
** Dataset partition: Training 100%
Training Dataset:
Link: Nemotron-Post-Training-Dataset-v2. Only the prompts from the dataset were used for data synthesis (the original responses from GPT were not used); the synthesized data was then used to train the Eagle modules (a sketch of this step follows the dataset fields below).
** Data Modality [Text]
** Data Collection Method by dataset [Automated]
** Labeling Method by dataset [Automated]
** Text Training Data Size [Less than a Billion Tokens]
** Properties: 48K samples, synthetic text data.
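A minimal sketch of the data-synthesis step described above, assuming the base Kimi-K2-Thinking model is served behind an OpenAI-compatible endpoint; the dataset field names, split, endpoint URL, and served model name are assumptions, not the exact pipeline used for this release:

```python
# Hypothetical sketch: synthesize Eagle training targets from Nemotron prompts.
# The dataset schema ("messages"), split, endpoint, and model name are assumptions.
from datasets import load_dataset
from openai import OpenAI

prompts = load_dataset("nvidia/Nemotron-Post-Training-Dataset-v2", split="train")
client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

synthetic = []
for row in prompts.select(range(100)):           # small demo slice
    user_turn = row["messages"][0]["content"]    # assumed chat-style record layout
    reply = client.chat.completions.create(
        model="Kimi-K2-Thinking",                # assumed name of the served base model
        messages=[{"role": "user", "content": user_turn}],
        max_tokens=1024,
    )
    synthetic.append({"prompt": user_turn, "response": reply.choices[0].message.content})
```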
Evaluation Dataset:
Link: MT-Bench; for more details, see here.
** Data Collection Method by dataset: Hybrid: Human, Synthetic
** Labeling Method by dataset: Hybrid: Human, Synthetic
** Properties: 3,300 multi-turn dialogue sequences, each annotated with expert preference votes.
Inference:
Acceleration Engine: TensorRT-LLM
Test Hardware: B200
Eagle Speculative Decoding
Synthesized data was obtained from Moonshot AI's Kimi-K2-Thinking model and then used to fine-tune the Eagle modules. This model is ready for inference with TensorRT-LLM in Eagle speculative decoding mode. Eagle modules are used to predict candidate tokens beyond the next token. In each generation step, the Eagle module drafts candidate tokens beyond the previously committed token, and the longest accepted candidate sequence is selected, so more than one token can be returned per generation step. The average number of tokens generated per step is referred to as the acceptance rate.
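As a rough illustration of this metric (not the actual benchmark harness), the per-step value reported in the Evaluation section can be computed from a speculative-decoding trace as in the following Python sketch; the trace values are made up:

```python
# Illustrative only: "acceptance rate" here means the average number of tokens
# committed per decoding step. With a draft length of 3, each step commits between
# 1 token (all drafts rejected) and 4 tokens (3 accepted drafts + the verified next token).
def tokens_per_step(committed_per_step):
    return sum(committed_per_step) / len(committed_per_step)

trace = [4, 2, 3, 1, 4, 3]                # hypothetical per-step committed-token counts
print(round(tokens_per_step(trace), 2))   # -> 2.83
```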
Usage
To serve the checkpoint with TensorRT-LLM, follow the sample commands below using the TensorRT-LLM GitHub repo:
trtllm-serve <Kimi-K2-Thinking-NVFP4 checkpoint> --host 0.0.0.0 --port 8000 --backend pytorch --max_batch_size 32 --max_num_tokens 8192 --max_seq_len 8192 --tp_size 4 --extra_llm_api_options extra-llm-api-config.yml
with extra-llm-api-config.yml being
speculative_config:
  decoding_type: Eagle
  max_draft_len: 3
  speculative_model_dir: <eagle3 checkpoint>
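Once the server is running, trtllm-serve exposes an OpenAI-compatible API on the configured port. A minimal client sketch is shown below; the served model name is an assumption (confirm it against the server's model list), and localhost:8000 matches the command above:

```python
# Minimal sketch: query the OpenAI-compatible endpoint started by trtllm-serve above.
# The model name is an assumption; check the server's /v1/models response to confirm it.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")
response = client.chat.completions.create(
    model="Kimi-K2-Thinking-NVFP4",  # assumed name of the served checkpoint
    messages=[{"role": "user", "content": "Explain speculative decoding in one sentence."}],
    max_tokens=256,
)
print(response.choices[0].message.content)
```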
Evaluation
The Eagle acceptance rate benchmark results on MT-Bench with draft length 3 are presented in the table below for the medium reasoning setting:
| Category | MT-Bench Acceptance Rate (tokens per step) |
|---|---|
| writing | 2.282 |
| roleplay | 2.181 |
| reasoning | 2.862 |
| math | 3.340 |
| coding | 2.760 |
| extraction | 3.104 |
| stem | 2.343 |
| humanities | 2.135 |
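For a quick single-number summary of the table above, the unweighted mean over the eight categories can be computed as follows:

```python
# Unweighted mean of the per-category MT-Bench acceptance rates listed above.
rates = {"writing": 2.282, "roleplay": 2.181, "reasoning": 2.862, "math": 3.340,
         "coding": 2.760, "extraction": 3.104, "stem": 2.343, "humanities": 2.135}
print(round(sum(rates.values()) / len(rates), 2))   # -> 2.63 tokens per step on average
```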
Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards below.
Please report model quality, risk, security vulnerabilities or NVIDIA AI Concerns here.
SUBCARDS:
Explainability
| Field | Response |
|---|---|
| Intended Task/Domain: | Text generation, reasoning, summarization, and question answering. |
| Model Type: | Text-to-text transformer |
| Intended Users: | This model is intended for developers, researchers, and customers building/utilizing LLMs, while balancing accuracy and efficiency. |
| Output: | Text String(s) |
| Describe how the model works: | Generates text by predicting the next word or token based on the context provided in the input sequence using multiple self-attention layers. |
| Name the adversely impacted groups this has been tested to deliver comparable outcomes regardless of: | Not Applicable |
| Technical Limitations & Mitigation: | The model was trained on data that contains toxic language and societal biases originally crawled from the internet. Therefore, the model may amplify those biases and return toxic responses especially when prompted with toxic prompts. Therefore, before deploying any applications of this model, developers should perform safety testing and tuning tailored to their specific applications of the model. |
| Verified to have met prescribed NVIDIA quality standards: | Yes |
| Performance Metrics: | Accuracy, Throughput, and user-side throughput |
| Potential Known Risks: | The model may generate answers that are inaccurate, omit key information, or include irrelevant or redundant text, and it may produce socially unacceptable or undesirable text even if the prompt itself does not include anything explicitly offensive. |
| Licensing: | Use of this model is governed by the NVIDIA Open Model License. ADDITIONAL INFORMATION: Kimi-K2-Thinking Modified MIT License. Built with Kimi-K2-Thinking. |
Bias
| Field | Response |
|---|---|
| Participation considerations from adversely impacted groups protected classes in model design and testing: | None |
| Measures taken to mitigate against unwanted bias: | None |
Safety & Security
| Field | Response |
|---|---|
| Model Application Field(s): | Chat, Instruction Following, Chatbot Development, Code Generation, Reasoning |
| Describe the life critical impact (if present) | Not Applicable |
| Use Case Restrictions: | Abide by the NVIDIA Open Model License. ADDITIONAL INFORMATION: Kimi-K2-Thinking Modified MIT License. Built with Kimi-K2-Thinking. |
| Model and Dataset Restrictions: | The principle of least privilege (PoLP) is applied to limit access for dataset generation. Dataset access is restricted during training, and dataset license constraints are adhered to. Model checkpoints are made available on Hugging Face, and may become available on cloud providers' model catalogs. |
Privacy
| Field | Response |
|---|---|
| Generatable or reverse engineerable personal data? | No |
| Was consent obtained for any personal data used? | Not Applicable |
| Personal data used to create this model? | None Known |
| How often is dataset reviewed? | Before Release |
| Was data from user interactions with the AI model (e.g. user input and prompts) used to train the model? | No |
| Is there provenance for all datasets used in training? | Yes |
| Does data labeling (annotation, metadata) comply with privacy laws? | Yes |
| Is data compliant with data subject requests for data correction or removal, if such a request was made? | No, not possible with externally-sourced data. |
| Applicable NVIDIA Privacy Policy | https://www.nvidia.com/en-us/about-nvidia/privacy-policy/ |