Delphermes-0.6B-R1-LORA

This is a merged LoRA model based on Qwen/Qwen3-0.6B, fine-tuned for language understanding and generation tasks.

Model Details

  • Base Model: Qwen/Qwen3-0.6B
  • Language: English (en)
  • Type: Merged LoRA model
  • Library: transformers

Usage

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "justinj92/Delphermes-0.6B-R1-LORA"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto"
)

# Example usage
text = "Hey"
inputs = tokenizer(text, return_tensors="pt").to(model.device)  # move inputs to the model's device
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
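Because the base model is a Qwen3 chat checkpoint, prompts are usually built with the tokenizer's chat template rather than raw text. A minimal sketch of chat-style usage (the message content is illustrative; see the Qwen/Qwen3-0.6B card for template specifics such as thinking-mode flags):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "justinj92/Delphermes-0.6B-R1-LORA"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Build a chat-formatted prompt from a list of messages
messages = [{"role": "user", "content": "Explain LoRA in one sentence."}]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,  # append the assistant-turn marker
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, not the echoed prompt
response = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(response)
```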

Training Details

This model was created by merging a LoRA adapter, trained for language understanding and generation, back into the Qwen/Qwen3-0.6B base model.
