# object_counting LoRA Models
This repository contains LoRA (Low-Rank Adaptation) models trained on the object_counting dataset.
## Models in this repository

Each folder contains one LoRA adapter, named after its hyperparameter configuration (listed here in order of learning rate and `max_steps`):

- `llama_finetune_object_counting_r16_alpha=32_dropout=0.05_lr5e-05_data_size1000_max_steps=500_seed=123/`
- `llama_finetune_object_counting_r16_alpha=32_dropout=0.05_lr0.0001_data_size1000_max_steps=100_seed=123/`
- `llama_finetune_object_counting_r16_alpha=32_dropout=0.05_lr0.0001_data_size1000_max_steps=500_seed=123/`
- `llama_finetune_object_counting_r16_alpha=32_dropout=0.05_lr0.0002_data_size1000_max_steps=100_seed=123/`
- `llama_finetune_object_counting_r16_alpha=32_dropout=0.05_lr0.0002_data_size1000_max_steps=500_seed=123/`
- `llama_finetune_object_counting_r16_alpha=32_dropout=0.05_lr0.0003_data_size1000_max_steps=100_seed=123/`
- `llama_finetune_object_counting_r16_alpha=32_dropout=0.05_lr0.0003_data_size1000_max_steps=500_seed=123/`
- `llama_finetune_object_counting_r16_alpha=32_dropout=0.05_lr0.0004_data_size1000_max_steps=100_seed=123/`
- `llama_finetune_object_counting_r16_alpha=32_dropout=0.05_lr0.0004_data_size1000_max_steps=500_seed=123/`
- `llama_finetune_object_counting_r16_alpha=32_dropout=0.05_lr0.0005_data_size1000_max_steps=100_seed=123/`
- `llama_finetune_object_counting_r16_alpha=32_dropout=0.05_lr0.0005_data_size1000_max_steps=500_seed=123/`
- `llama_finetune_object_counting_r16_alpha=32_dropout=0.05_lr0.0006_data_size1000_max_steps=100_seed=123/`
- `llama_finetune_object_counting_r16_alpha=32_dropout=0.05_lr0.0006_data_size1000_max_steps=500_seed=123/`
- `llama_finetune_object_counting_r16_alpha=32_dropout=0.05_lr0.0008_data_size1000_max_steps=100_seed=123/`
- `llama_finetune_object_counting_r16_alpha=32_dropout=0.05_lr0.0008_data_size1000_max_steps=500_seed=123/`
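The folder names encode every hyperparameter of the corresponding run (LoRA rank `r`, `alpha`, `dropout`, learning rate, training-set size, `max_steps`, and random seed). The sketch below shows one way to decode a name into a dictionary; `parse_run_name` is a hypothetical helper, not something shipped with this repository.

```python
import re

def parse_run_name(name: str) -> dict:
    """Decode a run-folder name such as
    'llama_finetune_object_counting_r16_alpha=32_dropout=0.05_lr0.0005_data_size1000_max_steps=100_seed=123'
    into its hyperparameters. Hypothetical helper, not part of this repo."""
    name = name.rstrip("/")
    pattern = (
        r"r(?P<r>\d+)_alpha=(?P<alpha>[\d.]+)_dropout=(?P<dropout>[\d.]+)"
        r"_lr(?P<lr>[\d.e-]+)_data_size(?P<data_size>\d+)"
        r"_max_steps=(?P<max_steps>\d+)_seed=(?P<seed>\d+)"
    )
    m = re.search(pattern, name)
    if m is None:
        raise ValueError(f"Unrecognized run name: {name}")
    hp = m.groupdict()
    return {
        "r": int(hp["r"]),
        "alpha": float(hp["alpha"]),
        "dropout": float(hp["dropout"]),
        "lr": float(hp["lr"]),
        "data_size": int(hp["data_size"]),
        "max_steps": int(hp["max_steps"]),
        "seed": int(hp["seed"]),
    }

print(parse_run_name(
    "llama_finetune_object_counting_r16_alpha=32_dropout=0.05_lr5e-05_data_size1000_max_steps=500_seed=123/"
))
# {'r': 16, 'alpha': 32.0, 'dropout': 0.05, 'lr': 5e-05, 'data_size': 1000, 'max_steps': 500, 'seed': 123}
```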
## Usage
To use these LoRA models, you'll need the `peft` library:

```bash
pip install peft transformers torch
```
Example usage:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model and tokenizer
base_model_name = "your-base-model"  # Replace with the actual base model
model = AutoModelForCausalLM.from_pretrained(base_model_name)
tokenizer = AutoTokenizer.from_pretrained(base_model_name)

# Load a LoRA adapter from one of the subfolders in this repo
model = PeftModel.from_pretrained(
    model,
    "supergoose/object_counting",
    subfolder="model_name_here",  # Replace with a specific model folder
)

# Run generation
inputs = tokenizer("Your prompt here", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
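To compare a few of the adapter variants on the same input, one option is to reload the adapter from each subfolder and generate with each in turn. The sketch below is a minimal example under the same assumptions as above (placeholder base model name, illustrative prompt); the subfolder names come from the list in this card.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_name = "your-base-model"  # Replace with the actual base model
tokenizer = AutoTokenizer.from_pretrained(base_model_name)

prompt = "I have three apples, two oranges, and a banana. How many fruits do I have?"
subfolders = [
    "llama_finetune_object_counting_r16_alpha=32_dropout=0.05_lr0.0001_data_size1000_max_steps=100_seed=123",
    "llama_finetune_object_counting_r16_alpha=32_dropout=0.05_lr0.0005_data_size1000_max_steps=500_seed=123",
]

for subfolder in subfolders:
    # Reload the base model so each adapter is applied to clean weights
    base = AutoModelForCausalLM.from_pretrained(base_model_name)
    model = PeftModel.from_pretrained(base, "supergoose/object_counting", subfolder=subfolder)
    model = model.merge_and_unload()  # optional: fold LoRA weights into the base for faster inference

    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=32)
    print(subfolder)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```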
## Training Details
- Dataset: object_counting
- Training framework: LoRA/PEFT
- Models included: 15 variants, all with r=16, alpha=32, dropout=0.05, 1,000 training examples, and seed 123; the sweep varies the learning rate (5e-05 to 8e-04) and `max_steps` (100 or 500)
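The training script itself is not included in this repository, but the LoRA settings encoded in the folder names can be written down as a `peft` `LoraConfig`. The sketch below is a reconstruction under that assumption, not the authors' actual configuration; the authoritative values for each variant are stored in that folder's `adapter_config.json`.

```python
from peft import LoraConfig

# LoRA settings shared by every variant (taken from the run-folder names).
lora_config = LoraConfig(
    r=16,               # LoRA rank
    lora_alpha=32,      # scaling factor
    lora_dropout=0.05,  # dropout applied inside the LoRA layers
    task_type="CAUSAL_LM",
    # target_modules is left unset here; the real adapters record their
    # target modules in adapter_config.json.
)

# The sweep behind the 15 variants: every learning rate below was run for
# both 100 and 500 steps, except 5e-05, which appears only with 500 steps.
learning_rates = [5e-05, 1e-04, 2e-04, 3e-04, 4e-04, 5e-04, 6e-04, 8e-04]
max_steps_options = [100, 500]
```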
 
## Files Structure
Each model folder contains:
- `adapter_config.json`: LoRA configuration
- `adapter_model.safetensors`: LoRA weights
- `tokenizer.json`: tokenizer configuration
- Additional training artifacts
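To check which folders and files are actually present before downloading anything, you can list the repository contents with `huggingface_hub`. A minimal sketch:

```python
from huggingface_hub import list_repo_files

files = list_repo_files("supergoose/object_counting")

# Collect the adapter folders: every path that contains an adapter_config.json
adapter_folders = sorted(
    path.rsplit("/", 1)[0]
    for path in files
    if path.endswith("adapter_config.json") and "/" in path
)
for folder in adapter_folders:
    print(folder)
```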
 
*Generated automatically by the LoRA uploader script.*