Co-rewarding
Co-rewarding is a novel self-supervised RL framework that improves training stability by seeking complementary supervision from another view of the same problem.
This is the Qwen3-8B-Base model trained with Self-Certainty Maximization on the OpenRS training set, as presented in the paper Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models.
For more details about the Co-rewarding framework, datasets, and training, please refer to our GitHub repository [https://github.com/tmlr-group/Co-rewarding].
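A minimal inference sketch with Hugging Face transformers, assuming the checkpoint is published on the Hub. The repository id below is a placeholder (substitute the actual model id from this page), and the prompt template is a generic chain-of-thought wrapper, not necessarily the exact format used in training.

```python
# Hypothetical usage sketch for this Co-rewarding-trained Qwen3-8B-Base checkpoint.
# MODEL_ID is a placeholder; replace it with the actual repository id on the Hub.
MODEL_ID = "tmlr-group/Qwen3-8B-Base-Co-rewarding"  # assumed, verify on the model page


def build_prompt(question: str) -> str:
    """Wrap a question in a plain completion-style prompt suitable for a base model.

    This is a generic template; check the GitHub repository for the exact
    prompt format used during Co-rewarding training and evaluation.
    """
    return f"Question: {question}\nAnswer: Let's think step by step."


def main() -> None:
    # Heavy imports and weight loading happen only when run directly.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )
    inputs = tokenizer(build_prompt("What is 15% of 80?"), return_tensors="pt").to(model.device)
    # Greedy decoding keeps the example deterministic; adjust sampling as needed.
    out = model.generate(**inputs, max_new_tokens=512, do_sample=False)
    # Decode only the newly generated tokens, not the echoed prompt.
    print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))


if __name__ == "__main__":
    main()
```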