## Introduction
XY-Tokenizer is a speech codec that models both the semantic and acoustic aspects of speech, converting audio into discrete tokens and decoding them back into high-quality audio. It achieves an efficient speech representation at only 1 kbps, using 8-layer residual vector quantization (RVQ8) at a 12.5 Hz frame rate.
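The 1 kbps figure follows directly from the quantizer configuration. As a sketch, assuming each of the 8 RVQ codebooks has 1024 entries (10 bits per code; the codebook size is an assumption for illustration, not stated here):

```python
# Bitrate of an RVQ codec: frame_rate * num_quantizers * bits_per_code.
# 1024-entry codebooks (10 bits per code) are assumed for illustration.
frame_rate_hz = 12.5   # frames per second
num_quantizers = 8     # RVQ8: 8 residual codebooks per frame
bits_per_code = 10     # log2(1024)

bitrate_bps = frame_rate_hz * num_quantizers * bits_per_code
print(bitrate_bps)  # 1000.0 bits per second = 1 kbps
```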
- Paper: Read on arXiv
- Source Code:
### 📚 Related Project: MOSS-TTSD
XY-Tokenizer serves as the underlying neural codec for MOSS-TTSD, our 1.7B audio language model.
Explore MOSS-TTSD for advanced text-to-speech and other audio generation tasks via its GitHub repository, blog posts (English and Chinese), and Space demo.
## ✨ Features
- Dual-channel modeling: Simultaneously captures semantic meaning and acoustic details
- Efficient representation: 1 kbps bitrate with RVQ8 quantization at a 12.5 Hz frame rate
- High-quality audio tokenization: Converts speech to discrete tokens and back with minimal quality loss
- Long audio support: Processes audio files longer than 30 seconds using chunking with overlap
- Batch processing: Efficiently processes multiple audio files in batches
- 24 kHz output: Generates high-quality 24 kHz audio output
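The long-audio bullet above comes down to plain index arithmetic: split the waveform into fixed-size windows whose starts advance by less than the window length, so adjacent chunks share an overlap region. The chunk and overlap sizes below are illustrative values, not the ones XY-Tokenizer actually uses:

```python
def chunk_spans(num_samples: int, sr: int = 24000,
                chunk_s: float = 30.0, overlap_s: float = 2.0):
    """Split a long waveform into overlapping [start, end) sample spans."""
    chunk = int(chunk_s * sr)
    hop = chunk - int(overlap_s * sr)  # stride between chunk starts
    spans = []
    start = 0
    while start < num_samples:
        spans.append((start, min(start + chunk, num_samples)))
        if start + chunk >= num_samples:
            break
        start += hop
    return spans

# A 70 s clip at 24 kHz -> three overlapping ~30 s chunks.
print(chunk_spans(70 * 24000))
```

After decoding, the overlapping regions are typically cross-faded or trimmed so the stitched output has no boundary artifacts.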
## 🚀 Installation
```bash
git clone https://github.com/OpenMOSS/MOSS-TTSD.git
cd MOSS-TTSD
conda create -n xy_tokenizer python=3.10 -y && conda activate xy_tokenizer
pip install -r XY_Tokenizer/requirements.txt
```
## 💻 Quick Start
Here's how to use XY-Tokenizer with `transformers` to encode an audio file into discrete tokens and decode it back into a waveform.
```python
import os

import torchaudio
from transformers import AutoModelForCausalLM
from transformers.models.moss_ttsd.processor_moss_ttsd import MossTTSDProcessor

# Load the processor (which wraps the XY-Tokenizer codec) and the model.
processor = MossTTSDProcessor.from_pretrained(
    "fnlp/MOSS-TTSD-v0.5",
    codec_path="gaoyang07/XY_Tokenizer",
    trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
    "fnlp/MOSS-TTSD-v0.5",
    trust_remote_code=True,
).eval()

# [S1]/[S2] mark speaker turns; prompt_text/prompt_audio supply a voice prompt.
data = [{
    "base_path": "./examples",
    "text": "[S1]单元009,你到底能不能好好工作?我劝你一句,这个时代,不跟上AI浪潮,就会被彻底淘汰![S2]这个嘛,那我得先问问硅基之主",
    "system_prompt": "你是一个根据文本生成对应音频的语音合成器。",
    "prompt_text": "[S1]嘎子,你听叔的,你听叔的,其实你跟所有人PK,有的时候我也在看,我也在看,无非两,两件事,一个是面子,不想输。[S2]你别说,那天潘老师有一个徒弟开直播,给我开专场,潘老师一徒弟开直播给我开专场,给我一顿骂。",
    "prompt_audio": "panchangjiang_gazi.wav",
}]

# Tokenize the inputs, generate discrete audio tokens, then decode to waveforms.
inputs = processor(data)
token_ids = model.generate(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])
text, audios = processor.batch_decode(token_ids)

# Save each generated audio fragment as a 24 kHz WAV file.
os.makedirs("outputs", exist_ok=True)
for i, sample in enumerate(audios):
    for j, fragment in enumerate(sample):
        torchaudio.save(f"outputs/audio_{i}_{j}.wav", fragment.cpu(), 24000)
```
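To get a feel for the sequence lengths involved: at a 12.5 Hz frame rate with 8 codebooks per frame, the token count grows slowly with clip duration. A back-of-the-envelope sketch (not an API call; the helper below is hypothetical):

```python
import math

def num_codec_tokens(duration_s: float, frame_rate_hz: float = 12.5,
                     num_quantizers: int = 8) -> int:
    """Total discrete codes for a clip: frames times codebooks per frame."""
    frames = math.ceil(duration_s * frame_rate_hz)
    return frames * num_quantizers

print(num_codec_tokens(10.0))  # 125 frames * 8 codebooks = 1000 codes
```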