# Reasoning Router 0.6B

`AmirMohseni/reasoning-router-0.6b` is a fine-tuned reasoning router built on top of `Qwen/Qwen3-0.6B`. It classifies user prompts into two categories:

- `no_think`: the task does not require explicit reasoning.
- `think`: the task benefits from a reasoning mode (e.g., math, multi-step analysis).

This router is designed for hybrid model systems, where it decides whether to route prompts to lightweight inference endpoints or to reasoning-enabled models such as the Qwen3 series or `deepseek-ai/DeepSeek-V3.1`.
## Use Case
The reasoning router enables efficient orchestration in model pipelines:

- Run cheap, fast inference for simple tasks.
- Switch to more powerful, more expensive reasoning models only when needed.

This approach helps reduce cost, latency, and unnecessary compute in real-world deployments.
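
As a concrete illustration, here is a minimal dispatch sketch built around the router. The backend names and the 0.5 confidence threshold are illustrative assumptions, not part of the model card:

```python
from transformers import pipeline

# Load the router once and reuse it for every incoming prompt.
router = pipeline(
    "text-classification",
    model="AmirMohseni/reasoning-router-0.6b",
    device_map="auto",
)

# Hypothetical backends: substitute whatever models your stack serves.
FAST_MODEL = "Qwen/Qwen3-0.6B"
REASONING_MODEL = "deepseek-ai/DeepSeek-V3.1"

def pick_model(prompt: str, threshold: float = 0.5) -> str:
    """Route to the fast model only when the router confidently predicts no_think."""
    result = router(prompt)[0]
    if result["label"] == "no_think" and result["score"] >= threshold:
        return FAST_MODEL
    return REASONING_MODEL
```

Because the router itself is only 0.6B parameters, the classification step is cheap relative to the generation call it gates.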
## Quick Start
### Example Usage
```python
from transformers import pipeline

# Load the router as a standard text-classification pipeline.
router = pipeline(
    "text-classification",
    model="AmirMohseni/reasoning-router-0.6b",
    device_map="auto",
)

prompt = "What is the sum of the first 100 prime numbers?"

# The pipeline returns a list with one {label, score} dict per input.
result = router(prompt)[0]
print(f"Label: {result['label']}")
print(f"Probability Score: {result['score']}")
```
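
By default the pipeline returns only the top label. To get the probabilities of both classes (for example, to apply your own confidence threshold), recent versions of transformers accept `top_k=None` on text-classification pipelines; a small sketch:

```python
# Request scores for both labels instead of just the argmax.
all_scores = router(prompt, top_k=None)
print(all_scores)
# Illustrative output shape (scores will vary):
# [{'label': 'think', 'score': 0.97}, {'label': 'no_think', 'score': 0.03}]
```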
## Training Data
This model was trained on the `AmirMohseni/reasoning-router-data-v2` dataset, which was curated from multiple instruction-following datasets. The dataset primarily contains:

- Math reasoning data: derived from Big-Math-RL and AIME problems (1983–2024).
- General tasks: a mix of simple and reasoning-heavy queries that teach the model to distinguish between the two.
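
To inspect the data yourself, it can be pulled from the Hub with the `datasets` library. This is a sketch: the split and column names depend on the dataset's actual schema, so print them before relying on them.

```python
from datasets import load_dataset

# Download the routing dataset from the Hugging Face Hub.
ds = load_dataset("AmirMohseni/reasoning-router-data-v2")

# Show the available splits and columns, then peek at one example.
print(ds)
first_split = next(iter(ds))
print(ds[first_split][0])
```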
## Limitations
- Language Coverage: The model is trained primarily on English; performance on other languages may be weaker.
- Reasoning Coverage: For tasks labeled `think`, the training data is heavily skewed toward mathematical reasoning.
- No Coding Tasks: Programming and code-related reasoning tasks are not included in the current training data.
## Model Details
- Base model: `Qwen/Qwen3-0.6B`
- Parameters: 0.6B
- Task: Binary classification (`no_think`, `think`)
- Intended use: Routing prompts for hybrid reasoning pipelines.
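
Since Qwen3 models expose a hybrid thinking mode through an `enable_thinking` flag in their chat template, one deployment pattern is to keep a single Qwen3 backend and let the router toggle that flag per prompt. A minimal sketch, assuming `router` is the pipeline from Quick Start and `tokenizer` is an already-loaded Qwen3 tokenizer:

```python
def build_inputs(prompt: str) -> str:
    """Render a chat prompt, enabling Qwen3's thinking mode only when the router asks for it."""
    label = router(prompt)[0]["label"]  # "think" or "no_think"
    messages = [{"role": "user", "content": prompt}]
    return tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True,
        enable_thinking=(label == "think"),  # Qwen3 chat-template flag
    )
```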
## Intended Use
- Routing user prompts in a multi-model reasoning system.
- Reducing compute costs by filtering out tasks that don't require a dedicated reasoning model.