w2v-bert-2.0-nchlt-gpu

This model is a fine-tuned version of facebook/w2v-bert-2.0 on a sample of the multilingual NCHLT dataset. It achieves the following results on the evaluation set:

  • Loss: 26.3390
  • Wer: 0.3546
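
A minimal inference sketch, assuming the checkpoint is published with a CTC head under the repository id dsfsi/w2v-bert-2.0-nchlt and that sample.wav is a 16 kHz speech recording; both are assumptions and should be adjusted to your setup:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint through the ASR pipeline.
# The repository id below is an assumption; replace it with the actual checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="dsfsi/w2v-bert-2.0-nchlt",
)

# For file-path inputs the pipeline decodes and resamples the audio
# (ffmpeg is required); raw 16 kHz numpy arrays also work.
result = asr("sample.wav")
print(result["text"])
```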

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 200
  • training_steps: 2000
  • mixed_precision_training: Native AMP
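
A hedged sketch of how these settings would map onto transformers.TrainingArguments; output_dir and the evaluation cadence are assumptions, the rest mirrors the list above:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="w2v-bert-2.0-nchlt-gpu",  # assumed output directory
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,        # effective train batch size: 8 * 2 = 16
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=200,
    max_steps=2000,
    fp16=True,                            # "Native AMP" mixed precision
    eval_strategy="steps",                # assumed: evaluation every 200 steps,
    eval_steps=200,                       # matching the results table below
)
```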

Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer    |
|---------------|--------|------|-----------------|--------|
| 192.7357      | 0.0669 | 200  | 112.5876        | 1.3313 |
| 46.1394       | 0.1338 | 400  | 49.4396         | 0.5885 |
| 39.6691       | 0.2008 | 600  | 40.5696         | 0.5211 |
| 34.8581       | 0.2677 | 800  | 36.9507         | 0.4672 |
| 28.864        | 0.3346 | 1000 | 32.2841         | 0.4224 |
| 29.1541       | 0.4015 | 1200 | 30.3475         | 0.4116 |
| 27.1871       | 0.4685 | 1400 | 28.8041         | 0.3870 |
| 24.8545       | 0.5354 | 1600 | 27.8487         | 0.3708 |
| 22.4881       | 0.6023 | 1800 | 27.0128         | 0.3594 |
| 25.343        | 0.6692 | 2000 | 26.3390         | 0.3546 |
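
The Wer column is the word error rate. A minimal sketch of how it is typically computed with the Hugging Face evaluate library (the strings below are placeholder transcripts, not NCHLT data):

```python
import evaluate

wer_metric = evaluate.load("wer")

# WER = (substitutions + insertions + deletions) / number of reference words
wer = wer_metric.compute(
    predictions=["die kat sit op die mat"],
    references=["die kat sit op die mat vandag"],
)
print(wer)
```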

Framework versions

  • Transformers 4.52.0
  • Pytorch 2.8.0+cu128
  • Datasets 3.6.0
  • Tokenizers 0.21.4