TheWhisper-Large-V3

Model Summary

TheWhisper-Large-V3 is a fine-tuned, high-performance variant of OpenAI's Whisper Large V3 model, optimized by TheStage AI for real-time, low-latency, and low-power speech-to-text (ASR) inference across multiple platforms, including NVIDIA GPUs and Apple Silicon (CoreML).

It provides streaming transcription, word timestamps, and scalable performance for use cases like real-time captioning, meetings, and on-device voice interfaces.

📊 Quality Benchmarks

TheWhisper is a fine-tuned Whisper model that can process audio chunks of any size up to 30 seconds. Unlike the original Whisper models, it does not require padding audio with silence to reach 30 seconds. We benchmarked quality across chunk sizes of 10, 15, 20, and 30 seconds, using the multilingual benchmarks from the Open ASR Leaderboard.
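For reference, corpus-level WER of the kind reported below can be computed with the open-source jiwer package. A minimal sketch (the reference and hypothesis lists here are illustrative placeholders, not benchmark data):

from jiwer import wer  # pip install jiwer

# Illustrative placeholders: ground-truth transcripts and model outputs
references = ["the quick brown fox", "hello world"]
hypotheses = ["the quick brown fox", "hello word"]

# Corpus-level word error rate, analogous to the Mean WER in the tables below
print(f"WER: {wer(references, hypotheses) * 100:.2f}")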

10s chunks

Model                           Mean WER
openai/whisper-large-v3-turbo   7.81
openai/whisper-large-v3         7.45
thewhisper-large-v3-turbo       7.88
thewhisper-large-v3             7.80

15s chunks

Model                           Mean WER
openai/whisper-large-v3-turbo   7.61
openai/whisper-large-v3         7.22
thewhisper-large-v3-turbo       7.45
thewhisper-large-v3             7.34

20s chunks

Model                           Mean WER
openai/whisper-large-v3-turbo   7.63
openai/whisper-large-v3         7.29
thewhisper-large-v3-turbo       7.47
thewhisper-large-v3             7.31

30s chunks

Model                           Mean WER
openai/whisper-large-v3-turbo   7.61
openai/whisper-large-v3         7.32
thewhisper-large-v3-turbo       7.45
thewhisper-large-v3             7.28

Quick start


Apple Usage

import torch
from thestage_speechkit.apple import ASRPipeline

hf_token = "<your_hf_token>"  # Hugging Face access token

model = ASRPipeline(
    model='TheStageAI/thewhisper-large-v3',
    # optimized model with ANNA
    model_size='S',
    chunk_length_s=10,
    token=hf_token
)

# inference
result = model(
    "path_to_your_audio.wav", 
    max_batch_size=32,
    return_timestamps="word"
)

print(result["text"])
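The call above requests word timestamps. Assuming the output follows the Hugging Face pipeline convention of a "chunks" list (an assumption, not confirmed by this card), they could be read back like this:

# Hypothetical: assumes HF-style output with word-level "chunks"
for chunk in result.get("chunks", []):
    start, end = chunk["timestamp"]
    print(f"[{start:.2f}s - {end:.2f}s] {chunk['text']}")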

Apple Usage with Streaming

from thestage_speechkit.apple import WhisperStreamingPipeline
from thestage_speechkit.streaming import MicStream, FileStream, StdoutStream

streaming_pipe = WhisperStreamingPipeline(
    model='TheStageAI/thewhisper-large-v3',
    # Optimized model by ANNA
    model_size='S',
    # Window length
    chunk_length_s=10,
    platform='apple'
)

# set stride in seconds
mic_stream = MicStream(step_size_s=0.5)
output_stream = StdoutStream()

while True:
    chunk = mic_stream.next_chunk()
    if chunk:
        approved_text, assumption = streaming_pipe(chunk)
        output_stream.rewrite(approved_text, assumption)
    else:
        break
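FileStream is imported above but not used in the loop; presumably it can replace MicStream to replay a recorded file through the same streaming pipeline, which is handy for testing without a microphone. A sketch under that assumption (the FileStream constructor arguments are guesses):

# Hypothetical: stream a file instead of the microphone
file_stream = FileStream("path_to_your_audio.wav", step_size_s=0.5)
output_stream = StdoutStream()

while True:
    chunk = file_stream.next_chunk()
    if chunk:
        approved_text, assumption = streaming_pipe(chunk)
        output_stream.rewrite(approved_text, assumption)
    else:
        break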

Nvidia Usage (Hugging Face Transformers)

import torch
from thestage_speechkit.nvidia import ASRPipeline

hf_token = "<your_hf_token>"  # Hugging Face access token

model = ASRPipeline(
    model='TheStageAI/thewhisper-large-v3',
    # allowed: 10s, 15s, 20s, 30s
    chunk_length_s=10,
    device='cuda',
    token=hf_token
)

# inference
result = model(
    audio="path_to_your_audio.wav", 
    max_batch_size=32,
    return_timestamps="segment"
)

print(result["text"])

Nvidia Usage (TheStage AI engines)

import torch
from thestage_speechkit.nvidia import ASRPipeline

hf_token = "<your_hf_token>"  # Hugging Face access token

model = ASRPipeline(
    model='TheStageAI/thewhisper-large-v3',
    # allowed: 10s, 15s, 20s, 30s
    chunk_length_s=10,
    # optimized TheStage AI engines
    mode='S',
    device='cuda',
    token=hf_token
)

# inference
result = model(
    "path_to_your_audio.wav", 
    max_batch_size=32,
    return_timestamps="segment"
)

print(result["text"])
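Since the model targets real-time inference, a quick sanity check is the real-time factor (RTF): processing time divided by audio duration, where values below 1.0 mean faster than real time. A minimal sketch around the pipeline call above, reading the audio duration with Python's standard wave module (assumes a PCM WAV input):

import time
import wave

# Audio duration from the WAV header
with wave.open("path_to_your_audio.wav") as f:
    duration_s = f.getnframes() / f.getframerate()

start = time.perf_counter()
result = model("path_to_your_audio.wav", max_batch_size=32)
elapsed_s = time.perf_counter() - start

# RTF < 1.0 means faster than real time
print(f"RTF: {elapsed_s / duration_s:.3f}")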

Model Details


  • Developed by: TheStage AI
  • Model type: Speech-to-Text (Automatic Speech Recognition)
  • Languages: Multilingual (same as Whisper Large V3: ~99 languages supported)
  • License: MIT
  • Finetuned from: openai/whisper-large-v3
  • Frameworks: PyTorch, CoreML
  • Supported Platforms:
    • NVIDIA GPUs (CUDA 11.8+)
    • Apple Silicon (M1–M4, macOS 15+)
