# Canis.teach - Qwen3-4B Instruct (Language)

LoRA adapters for the Language tutor in the Canis.teach suite.

- Base Model: Qwen/Qwen3-4B-Instruct-2507
- Release: CanisAI/teach-language-qwen3-4b-2507-r1
- Project: Canis.teach - Learning that fits.
- Subject: Language
## What is this?

This repository provides LoRA adapters fine-tuned on Language tutoring dialogues. Apply these adapters to the base model to enable subject-aware, didactic behavior without downloading a full merged checkpoint.

The model is designed to teach rather than simply answer: it provides step-by-step explanations, hints, and pedagogically structured responses.

For ready-to-run merged models or Ollama-friendly GGUF quantizations, see the "Related Models" section.
## Quick Start

### Installation

```bash
pip install transformers peft torch
```
### Usage (LoRA)

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base = "Qwen/Qwen3-4B-Instruct-2507"
adapter = "CanisAI/teach-language-qwen3-4b-2507-r1"

tokenizer = AutoTokenizer.from_pretrained(base, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    base,
    device_map="auto",
    torch_dtype="auto",
)

# Attach the LoRA adapter to the base model
model = PeftModel.from_pretrained(model, adapter)

# Example prompt, formatted with the model's chat template
prompt = "Improve this sentence for clarity while keeping the tone: 'Communication is just saying things.'"
messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=256,
    temperature=0.7,
    top_p=0.8,
    top_k=20,
    do_sample=True,
)

# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
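If you prefer a standalone checkpoint instead of loading the adapter at runtime, PEFT can merge the LoRA weights into the base model. A minimal sketch, assuming the adapter is loaded as above; the output directory name is just an example, not an official release path:

```python
# Optional: merge the LoRA weights into the base model and save a standalone checkpoint.
# "./teach-language-merged" is an example output directory (assumption, not an official path).
merged = model.merge_and_unload()
merged.save_pretrained("./teach-language-merged")
tokenizer.save_pretrained("./teach-language-merged")
```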
## Training Details

- Base Model: Qwen/Qwen3-4B-Instruct-2507
- Training Method: Supervised Fine-Tuning (SFT) with LoRA
- Framework: Unsloth + TRL/PEFT
- Data: Canis.lab-curated Language tutoring dialogues
- Target Modules: Query, Key, Value, Output projections (see the configuration sketch below)
- Rank: 16
- Alpha: 32
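For reference, a PEFT configuration matching the hyperparameters above might look roughly like the following. The module names assume Qwen's attention projection naming (`q_proj`, `k_proj`, `v_proj`, `o_proj`); dropout and bias settings are assumptions, since the card does not list them.

```python
from peft import LoraConfig

# Illustrative LoRA configuration matching the card's stated hyperparameters.
# Dropout and bias values are assumptions, not taken from the actual training run.
lora_config = LoraConfig(
    r=16,                 # Rank
    lora_alpha=32,        # Alpha
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # Q/K/V/Output projections
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
```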
## Intended Use

- Primary: Subject-aware tutoring for Language education
- Applications: Educational prototypes, tutoring systems, research
- Approach: Stepwise explanations, pedagogical hints, rubric-aligned responses
- Target Audience: Students, educators, researchers
## Model Behavior

The model is optimized for:

- Clear, step-by-step explanations
- Appropriate difficulty progression
- Encouraging learning through hints rather than direct answers (see the example after this list)
- Subject-specific pedagogical approaches
- Maintaining educational standards and accuracy
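As an illustration of the hint-first behavior described above, a request can pair a short system instruction with a student question. The system prompt below is a hypothetical example, not an official Canis.teach prompt; `tokenizer` and `model` are the objects loaded in the Quick Start.

```python
# Hypothetical tutoring-style request; the system prompt is illustrative only.
messages = [
    {
        "role": "system",
        "content": "You are a Language tutor. Guide the student with hints and questions before giving a full answer.",
    },
    {
        "role": "user",
        "content": "What's the difference between active and passive voice?",
    },
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
```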
## Recommended Settings

For optimal tutoring behavior (a configuration sketch follows the list):

- Temperature: 0.6-0.8
- Top-p: 0.8-0.9
- Top-k: 20-40
- Max tokens: 256-512 (depending on complexity)
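One way to apply these ranges is to set them once as the model's default generation config, so plain `model.generate(inputs)` calls pick them up. The specific values below are just one choice within the recommended ranges:

```python
from transformers import GenerationConfig

# Set tutoring-friendly defaults once; values chosen from the recommended ranges above.
model.generation_config = GenerationConfig(
    do_sample=True,
    temperature=0.7,     # recommended 0.6-0.8
    top_p=0.9,           # recommended 0.8-0.9
    top_k=20,            # recommended 20-40
    max_new_tokens=512,  # 256-512 depending on task complexity
)
```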
## Safety and Limitations

Important considerations:

- Human oversight required for educational use
- May occasionally hallucinate or oversimplify complex topics
- For fact-critical applications, consider RAG with verified curriculum sources
- Follow your institution's data privacy and AI usage policies
- Not a replacement for qualified human instruction
## Related Models

| Type | Repository | Description |
|---|---|---|
| LoRA Adapters | CanisAI/teach-language-qwen3-4b-2507-r1 | This repository (lightweight) |
| Merged Model | (Coming Soon) | Ready-to-use full model |
| GGUF Quantized | (Coming Soon) | Ollama/llama.cpp compatible |
| Dataset | CanisAI/teach-language-v1 | Training data |
## License

This model inherits the license from the base model (Qwen/Qwen3-4B-Instruct-2507). Please review the base model's license terms before use.
## Citation

```bibtex
@misc{canis-teach-language,
  title={Canis.teach Language Tutor},
  author={CanisAI},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/CanisAI/teach-language-qwen3-4b-2507-r1}}
}
```
## Acknowledgments

- Qwen Team for the excellent base model
- Unsloth for efficient training tools
- Hugging Face ecosystem (Transformers, PEFT, TRL)
- Educators and contributors supporting the Canis.teach project
Canis.teach - Learning that fits.