PCMind-2.1-Kaiyuan-2B (脑海-2.1-开元-2B)

PCMind-2.1-Kaiyuan-2B is a cutting-edge, fully open-source language model (i.e., its training datasets are open as well) trained on an Ascend 910A cluster. With 1.4B non-embedding parameters trained on 2.2 trillion tokens, it achieves performance competitive with current state-of-the-art fully open models and even rivals some leading open-weight models of similar scale.

Model Performance Comparison

We will publish the datasets used to train Kaiyuan-2B soon.

Introduction

Our data preprocessing and pre-training pipeline improves training efficiency and model quality through several key innovations:

  1. Dataset Quality Benchmarking: A quantile benchmarking approach applied to major open-source pretraining datasets (e.g., DCLM Baseline, FineWeb-Edu) reveals their quality distributions via small-scale training runs, informing better data selection (sketched alongside item 3 after this list).

  2. Multi-Phase Pre-Training: Training progresses through five phases, strategically increasing the proportion of reasoning-intensive and knowledge-intensive samples while selectively repeating the highest-quality portions of the data.

  3. Multi-Domain Curriculum Learning: We keep the data mixture across datasets stable while ordering samples within each dataset by ascending quality (see the sketch after this list). This curriculum is further exploited through an adapted learning-rate decay schedule and model averaging.

  4. High-Performance Data Preprocessing: We built an open-source, Spark-based framework optimized with Chukonu, delivering exceptional efficiency for large-scale deduplication and sorting tasks (a plain-Spark illustration follows this list).

  5. Architecture for Training Stability: Optimized for training on Ascend 910A clusters (which, like the V100, train in FP16 rather than BF16), the Kaiyuan-2B architecture integrates QK norm, sandwich norm, and soft-capping techniques to ensure stable and robust pre-training (a minimal sketch of these techniques also follows this list).
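
As a rough illustration of items 1 and 3, the sketch below summarizes a dataset's quality distribution by quantiles and builds a training order that sorts samples within each dataset by ascending quality while holding the cross-dataset mixture fixed. The scorer, dataset names, and mixture weights are hypothetical placeholders, not the actual Kaiyuan pipeline.

```python
import numpy as np

def quality_quantiles(scores, qs=(0.25, 0.5, 0.75)):
    """Item 1: summarize a dataset's per-document quality scores by quantiles.

    `scores` can come from any scorer (e.g., a small quality classifier);
    comparing these quantiles across corpora via small-scale training runs
    is what informs data selection.
    """
    return {q: float(np.quantile(scores, q)) for q in qs}

def curriculum_order(datasets, mixture, seed=0):
    """Item 3: ascending-quality order within each dataset, stable mixture across them.

    `datasets` maps name -> list of (quality_score, doc);
    `mixture` maps name -> sampling weight held fixed over training.
    """
    # Sort each dataset by ascending quality so the best documents are
    # seen near the end of training.
    sorted_ds = {n: sorted(docs, key=lambda d: d[0]) for n, docs in datasets.items()}
    cursors = {n: 0 for n in datasets}
    rng = np.random.default_rng(seed)
    schedule = []
    while True:
        # Sample the next dataset from the (renormalized) fixed mixture.
        avail = [n for n in datasets if cursors[n] < len(sorted_ds[n])]
        if not avail:
            return schedule
        w = np.array([mixture[n] for n in avail], dtype=float)
        name = rng.choice(avail, p=w / w.sum())
        schedule.append(sorted_ds[name][cursors[name]][1])
        cursors[name] += 1
```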

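The preprocessing framework in item 4 exposes the standard Spark API (Chukonu accelerates it transparently), so a large-scale exact-deduplication pass might look like the following plain PySpark job. The input path and column name are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("exact-dedup").getOrCreate()

# Hypothetical corpus layout: one JSON object per line with a "text" field.
docs = spark.read.json("corpus/*.jsonl")

deduped = (
    docs
    # Normalize case and whitespace so trivially differing copies collide.
    .withColumn("norm", F.lower(F.regexp_replace(F.col("text"), r"\s+", " ")))
    # Hash the normalized text and keep one document per digest.
    .withColumn("digest", F.sha2(F.col("norm"), 256))
    .dropDuplicates(["digest"])
    .drop("norm", "digest")
)

deduped.write.mode("overwrite").json("corpus_dedup/")
```

The stability techniques in item 5 are standard in the literature; the minimal PyTorch sketch below shows how QK norm, soft-capping, and sandwich norm slot into a transformer. It illustrates the techniques themselves, not Kaiyuan-2B's exact code, and the cap value of 50.0 is an assumed placeholder. (`nn.RMSNorm` requires PyTorch >= 2.4.)

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def soft_cap(x: torch.Tensor, cap: float) -> torch.Tensor:
    # Smoothly bound values to (-cap, cap): prevents FP16 overflow without
    # the dead gradients of hard clipping.
    return cap * torch.tanh(x / cap)

class StableAttention(nn.Module):
    """Single-head attention with QK norm and soft-capped attention logits."""

    def __init__(self, dim: int, cap: float = 50.0):  # placeholder cap value
        super().__init__()
        self.q_proj = nn.Linear(dim, dim, bias=False)
        self.k_proj = nn.Linear(dim, dim, bias=False)
        self.v_proj = nn.Linear(dim, dim, bias=False)
        # QK norm: normalize queries and keys so their dot products (and
        # hence the attention logits) stay bounded as weights grow.
        self.q_norm = nn.RMSNorm(dim)
        self.k_norm = nn.RMSNorm(dim)
        self.cap = cap
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q = self.q_norm(self.q_proj(x))
        k = self.k_norm(self.k_proj(x))
        v = self.v_proj(x)
        logits = soft_cap(q @ k.transpose(-2, -1) * self.scale, self.cap)
        return F.softmax(logits, dim=-1) @ v

class SandwichBlock(nn.Module):
    """Sandwich norm: normalize a sublayer's input *and* output inside the residual."""

    def __init__(self, dim: int, sublayer: nn.Module):
        super().__init__()
        self.pre = nn.RMSNorm(dim)
        self.post = nn.RMSNorm(dim)
        self.sublayer = sublayer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.post(self.sublayer(self.pre(x)))
```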
Usage

The model architecture is similar to that of Qwen/Qwen3-1.7B and can be loaded with standard libraries such as transformers.
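
For instance, a minimal loading sketch using the standard transformers API (the prompt and generation settings here are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "thu-pacman/PCMind-2.1-Kaiyuan-2B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```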

Please use demo.py as an example.

Note: This is a pretrained base model that has not undergone fine-tuning, reinforcement learning (RL), or any other post-training procedure; it is not ready for direct conversation. We recommend few-shot prompting to guide its outputs, or fine-tuning for specific downstream applications.
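
For example, a hypothetical few-shot prompt for the base model (reusing the tokenizer and model loaded above):

```python
few_shot = """Translate English to French.

English: cheese
French: fromage

English: bread
French: pain

English: water
French:"""

inputs = tokenizer(few_shot, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=5)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```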

Citation

Our technical report is coming soon!

License

The code and model weights of Kaiyuan-2B are licensed under the Apache License 2.0 with the following copyright notice.

Copyright 2025 Tsinghua University & Peng Cheng Laboratory

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.