
ChatterboxTTS Hebrew Fine-tuned Model

This is a Hebrew fine-tuned version of ResembleAI/chatterbox for Hebrew text-to-speech synthesis.

Model Details

  • Base Model: ResembleAI/chatterbox
  • Language: Hebrew
  • Training Data: Hebrew phonemes dataset
  • Fine-tuning: T3 component only (voice encoder and S3Gen frozen)
  • Final Training Loss: 7.8869

Usage

from transformers import AutoModel

# Download the fine-tuned weights from the Hub
model = AutoModel.from_pretrained("MayBog/orpheus-hebrew-lora-v3")

# Synthesis itself runs through the ChatterboxTTS pipeline;
# see the hedged inference sketch below.
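
For end-to-end synthesis, the open-source chatterbox-tts package from ResembleAI can be used. The sketch below is a minimal, hedged example: it assumes this repository mirrors the file layout of the base ResembleAI/chatterbox checkpoint so that ChatterboxTTS.from_local can load it directly, and the Hebrew input string is only illustrative.

import torchaudio as ta
from huggingface_hub import snapshot_download
from chatterbox.tts import ChatterboxTTS

# Download the repository and load it with the ChatterboxTTS pipeline
# (assumes the same checkpoint layout as the base model).
ckpt_dir = snapshot_download("MayBog/orpheus-hebrew-lora-v3")
model = ChatterboxTTS.from_local(ckpt_dir, device="cuda")

# Generate Hebrew speech and write it to disk; model.sr is the output sample rate.
wav = model.generate("שלום, מה שלומך?")
ta.save("hebrew_sample.wav", wav, model.sr)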

Training Details

  • Training Steps: 4373
  • Batch Size: 4 (per device)
  • Learning Rate: 5e-05
  • Gradient Accumulation: 2 steps
  • Warmup Steps: 100
  • Evaluation Strategy: Every 2000 steps
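
These hyperparameters map onto a standard Hugging Face Trainer configuration. The snippet below is a sketch under that assumption; the actual training script is not published with this model, so names like output_dir are illustrative.

from transformers import TrainingArguments

# Sketch of a Trainer configuration matching the numbers above.
# On older transformers versions the last two arguments are
# evaluation_strategy / eval_steps instead of eval_strategy / eval_steps.
training_args = TrainingArguments(
    output_dir="chatterbox-hebrew-t3",   # illustrative output path
    per_device_train_batch_size=4,
    gradient_accumulation_steps=2,       # effective batch size of 8
    learning_rate=5e-5,
    warmup_steps=100,
    max_steps=4373,
    eval_strategy="steps",
    eval_steps=2000,
)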

Model Architecture

This model uses the ChatterboxTTS architecture with:

  • Voice Encoder: Frozen (pre-trained weights)
  • S3Gen: Frozen (pre-trained weights)
  • T3: Fine-tuned on Hebrew data
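
In code, this selective fine-tuning amounts to freezing the voice encoder and S3Gen and leaving only T3 trainable. A minimal sketch, assuming the loaded ChatterboxTTS object exposes its sub-modules as ve, s3gen, and t3 (the attribute names used in the open-source chatterbox codebase) and that model was loaded as in the usage sketch above:

# Freeze the pre-trained components; only T3 receives gradient updates.
for module in (model.ve, model.s3gen):
    for p in module.parameters():
        p.requires_grad = False

for p in model.t3.parameters():
    p.requires_grad = True

trainable = sum(p.numel() for p in model.t3.parameters() if p.requires_grad)
print(f"Trainable T3 parameters: {trainable:,}")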

Dataset

Trained on a Hebrew phonemes dataset with audio-phoneme pairs for Hebrew text-to-speech.
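
The dataset itself is not published with this card. For orientation only, a training example is assumed to pair an audio clip with its phonemized transcription, roughly as follows (field names and values are hypothetical):

# Hypothetical illustration of one audio-phoneme pair; the real dataset
# schema and sampling rate are not documented in this repository.
sample = {
    "audio": {"path": "clip_0001.wav", "sampling_rate": 24000},
    "phonemes": "ʃ a ˈl o m",   # phonemized Hebrew
    "text": "שלום",             # original orthography
}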

Limitations

  • Optimized specifically for Hebrew language
  • May not perform well on other languages
  • Fine-tuned only on T3 component

Citation

@misc{chatterbox-hebrew-finetuned,
    title={ChatterboxTTS Hebrew Fine-tuned Model},
    author={[Your Name]},
    year={2025},
    url={https://huggingface.co/MayBog/orpheus-hebrew-lora-v3}
}