Command-R 35B SFT (Supervised Fine-Tuning on Synthetic QA)

Model type: Causal Language Model
Base model: CohereLabs/c4ai-command-r-v01
License: Apache 2.0
Framework: Axolotl


Overview

commandr-35b-sft is a supervised fine-tuned variant of Cohere's Command-R 35B model.
Fine-tuning was performed with LoRA adapters on a high-quality synthetic instruction-following QA dataset to improve conversational reasoning and question answering.

Training was conducted on the Leonardo EuroHPC system.
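
For quick experimentation, the adapter can be loaded on top of the base model with Transformers and PEFT. The sketch below assumes the adapter is published as ubitech-edg/commandr-35b-sft (the repository this card belongs to) and loads the base model in bfloat16 to match the training precision; it is a minimal example, not the exact inference setup used.

```python
# Minimal sketch: load the base Command-R model and attach this LoRA adapter.
# Assumes the adapter repository is ubitech-edg/commandr-35b-sft (per this card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "CohereLabs/c4ai-command-r-v01"
adapter_id = "ubitech-edg/commandr-35b-sft"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,   # matches the bfloat16 training precision
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter_id)

prompt = "What is supervised fine-tuning?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```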


Training Setup

Objective: Supervised fine-tuning (instruction following)
Adapter type: LoRA
Precision: bfloat16
Hardware: 8 nodes × 2 NVIDIA A100 64 GB GPUs (16 GPUs total)
Framework: DeepSpeed ZeRO-1, Axolotl, PyTorch 2.5.1+cu121
Runtime: ~6 hours
Dataset split: 70% train / 30% validation
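
For illustration, a DeepSpeed ZeRO stage-1 configuration consistent with the values reported on this card could look like the sketch below (expressed as a Python dict); the actual configuration file used on Leonardo is not part of this card.

```python
# Illustrative DeepSpeed ZeRO-1 settings consistent with this card:
# bfloat16 precision, micro batch size 1, gradient accumulation 2.
# This is a sketch of a plausible config, not the exact file used in training.
deepspeed_config = {
    "zero_optimization": {"stage": 1},   # ZeRO-1: shard optimizer states across GPUs
    "bf16": {"enabled": True},           # bfloat16 mixed precision
    "train_micro_batch_size_per_gpu": 1,
    "gradient_accumulation_steps": 2,
}
```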


Dataset

Name: axolotl_deduplicated_synthetic_qa.jsonl
Type: Instruction-following synthetic QA dataset

Each sample follows the QA/chat format expected by Axolotl's alpaca_chat.load_qa loader, as illustrated below.
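
As a rough illustration of what a record in this format looks like (the question/answer field names follow Axolotl's alpaca_chat.load_qa convention; the example text itself is invented, not taken from the dataset):

```python
# Illustrative JSONL record in the shape expected by Axolotl's alpaca_chat.load_qa
# loader (question/answer fields). The content below is a made-up example, not a
# sample from axolotl_deduplicated_synthetic_qa.jsonl.
import json

record = {
    "question": "What scheduler was used for the learning rate?",
    "answer": "A cosine learning-rate scheduler with 20 warmup steps.",
}
print(json.dumps(record, ensure_ascii=False))
```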


Hyperparameters

Sequence length: 2048
Micro batch size: 1
Gradient accumulation steps: 2
Epochs: 1
Learning rate: 0.0001
LR scheduler: cosine
Optimizer: AdamW (8-bit)
Warmup steps: 20
Weight decay: 0.0
LoRA rank (r): 16
LoRA alpha: 32
LoRA dropout: 0.05
LoRA target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
Gradient checkpointing: enabled
Flash attention: enabled
Auto resume: enabled
Loss watchdog threshold: 8.0
Loss watchdog patience: 20
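
The LoRA settings above map directly onto a PEFT LoraConfig. A minimal sketch is shown below; task_type="CAUSAL_LM" and bias="none" are assumptions (standard choices for causal-LM fine-tuning), not values stated on this card.

```python
# Minimal sketch of a PEFT LoraConfig mirroring the hyperparameters listed above.
# task_type and bias are assumed defaults, not taken from the card.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    bias="none",
    task_type="CAUSAL_LM",
)
```

With a micro batch size of 1, 2 gradient-accumulation steps, and 16 GPUs, the effective global batch size works out to 1 × 2 × 16 = 32 sequences per optimizer step.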

Tokenizer

Tokenizer type: AutoTokenizer
Special token: <|end_of_text|> as pad_token
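
A minimal sketch of reproducing this tokenizer setup, relying on the card's statement that <|end_of_text|> is used as the padding token:

```python
# Minimal sketch: load the base tokenizer and assign <|end_of_text|> as pad token,
# as described on this card.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("CohereLabs/c4ai-command-r-v01")
tokenizer.pad_token = "<|end_of_text|>"
```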

Model repository: ubitech-edg/commandr-35b-sft (LoRA adapter for CohereLabs/c4ai-command-r-v01, 35B parameters, BF16)