Reasoning Router
Route between "thinking" and "no-thinking" modes for hybrid models like Qwen3. Blog: https://huggingface.co/blog/AmirMohseni/reasoning-router
AmirMohseni/reasoning-router-0.6b is a fine-tuned reasoning router built on top of Qwen/Qwen3-0.6B. It classifies user prompts into two categories:
- no_think: The task does not require explicit reasoning.
- think: The task benefits from a reasoning mode (e.g., math, multi-step analysis).

This router is designed for hybrid model systems, where it decides whether to route prompts to lightweight inference endpoints or to reasoning-enabled models such as the Qwen3 series or deepseek-ai/DeepSeek-V3.1.
The reasoning router enables efficient orchestration in model pipelines: prompts that need explicit reasoning are sent to a reasoning-enabled model, while everything else goes to a lightweight endpoint. This reduces cost, latency, and unnecessary compute in real-world deployments.
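The orchestration pattern above can be sketched as follows. This is a minimal illustration, not part of the released model: `classify`, `fast_endpoint`, `reasoning_endpoint`, and the stub classifier are hypothetical stand-ins for your own pipeline call and inference backends.

```python
# Sketch of label-based routing, assuming a classifier that returns
# {"label": ..., "score": ...} in the same shape as the transformers
# text-classification pipeline. All endpoint names are hypothetical.

def route(prompt, classify, fast_endpoint, reasoning_endpoint, threshold=0.5):
    """Send the prompt to a reasoning-enabled model only when the
    router predicts 'think' with sufficient confidence."""
    result = classify(prompt)
    if result["label"] == "think" and result["score"] >= threshold:
        return reasoning_endpoint(prompt)
    return fast_endpoint(prompt)

# Stub classifier standing in for the real router, for demonstration only:
def stub(prompt):
    if "sum" in prompt:
        return {"label": "think", "score": 0.9}
    return {"label": "no_think", "score": 0.8}

answer = route(
    "What is the sum of the first 100 prime numbers?",
    stub,
    lambda p: "fast-model answer",
    lambda p: "reasoning-model answer",
)
print(answer)
```

The confidence threshold lets you trade compute for accuracy: raising it routes more borderline prompts to the cheap endpoint.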
```python
from transformers import pipeline

# Initialize the router pipeline
router = pipeline(
    "text-classification",
    model="AmirMohseni/reasoning-router-0.6b",
    device_map="auto",
)

# Example prompt that requires reasoning
prompt = "What is the sum of the first 100 prime numbers?"

result = router(prompt)[0]
print("Label:", result["label"])    # e.g. "think" for a math prompt like this
print("Score:", result["score"])    # classifier confidence in [0, 1]
```
This model was trained on the AmirMohseni/reasoning-router-data-v2 dataset, which was curated from multiple instruction-following datasets. The dataset primarily contains:
- Prompts labeled no_think, drawn from general instruction-following tasks.
- Prompts labeled think, drawn from reasoning-heavy tasks; because most of these come from math datasets, the training data is heavily skewed towards mathematical reasoning.

The router is fine-tuned from Qwen/Qwen3-0.6B as a text classifier with two output labels (no_think, think).
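Once the router has produced a label, a downstream hybrid model needs to be told which mode to run in. As a minimal sketch, assuming a Qwen3-style tokenizer whose `apply_chat_template` accepts an `enable_thinking` flag, the label can be translated into template options like this (`template_kwargs` is a hypothetical helper, shown without downloading a model):

```python
# Map the router's label onto Qwen3's chat-template switch.
# Qwen3 tokenizers expose an `enable_thinking` flag in
# `apply_chat_template`; here we only build the kwargs dict.

def template_kwargs(router_label):
    """Translate a router label into Qwen3 chat-template options."""
    return {
        "add_generation_prompt": True,
        "enable_thinking": router_label == "think",
    }

print(template_kwargs("think"))     # thinking mode on
print(template_kwargs("no_think"))  # thinking mode off
```

In a real deployment you would pass these kwargs through to `tokenizer.apply_chat_template(messages, **template_kwargs(label))` before generation.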