SFT fine-tuning
When I run an SFT fine-tuning job with a batch size of 4 and max_steps of 1000, it errors out with a tokenizer-related error:
The tokenizer has new PAD/BOS/EOS tokens that differ from the model config and generation config. The model config and generation config were aligned accordingly, being updated with the tokenizer's values. Updated tokens: {'bos_token_id': None, 'pad_token_id': None}.
0%| | 0/500 [00:00<?, ?it/s]Traceback (most recent call last):
File "/tmp/script.py", line 191, in <module>
trainer.train()
File "/root/.cache/uv/environments-v2/script-912247c0edd68a55/lib/python3.12/site-packages/transformers/trainer.py", line 2328, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "/root/.cache/uv/environments-v2/script-912247c0edd68a55/lib/python3.12/site-packages/transformers/trainer.py", line 2672, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/.cache/uv/environments-v2/script-912247c0edd68a55/lib/python3.12/site-packages/trl/trainer/sft_trainer.py", line 1189, in training_step
return super().training_step(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/.cache/uv/environments-v2/script-912247c0edd68a55/lib/python3.12/site-packages/transformers/trainer.py", line 4009, in training_step
loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/.cache/uv/environments-v2/script-912247c0edd68a55/lib/python3.12/site-packages/trl/trainer/sft_trainer.py", line 1123, in compute_loss
entropy = torch.sum(per_token_entropy * attention_mask) / attention_mask.sum()
~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~
RuntimeError: The size of tensor a (4) must match the size of tensor b (8) at non-singleton dimension 0
0%| | 0/500 [00:01<?, ?it/s]
What might be wrong here?
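For context, the setup looks roughly like this; the model and dataset names below are placeholders, not the exact ones from my script:

# Rough sketch of the failing setup; model and dataset are placeholders.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer

model_name = "Qwen/Qwen2.5-0.5B"                           # placeholder model
dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

training_args = SFTConfig(
    output_dir="./sft-output",
    per_device_train_batch_size=4,  # batch size of 4
    max_steps=1000,                 # max steps of 1000
)

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()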
I get a similar warning, and inference with pipeline on my trained model never finishes.
What am I doing wrong? :) PS: I use a different model, but the issue is the same.
Which training dataset are you using, and is push_to_hub set to true or false?
I encountered the same issue when training with the s1t_1.1_think split of the SFT subset. The issue is related to padding and a tensor shape mismatch coming from the tokenizer.
You will notice the issue doesn't always occur: it runs fine on some subsets of the dataset and, presumably, errors when a particular split produces tensors with a different dimension.
The tokenizer has new PAD/BOS/EOS tokens that differ from the model config and generation config.
You should handle the tokenizer's padding and BOS/EOS (beginning- and end-of-sequence) tokens explicitly in the code to overcome the issue, as in the sketch below.
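Something along these lines; a minimal sketch assuming a causal LM loaded with transformers (the model name is a placeholder, and reusing EOS as the pad token is just one common choice):

# Minimal sketch: give the tokenizer a pad token and keep the model config
# and generation config in sync with the tokenizer's special tokens.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# If the tokenizer defines no pad token, reuse EOS for padding.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# Align model config and generation config with the tokenizer.
model.config.pad_token_id = tokenizer.pad_token_id
model.config.bos_token_id = tokenizer.bos_token_id
model.config.eos_token_id = tokenizer.eos_token_id
model.generation_config.pad_token_id = tokenizer.pad_token_id
model.generation_config.bos_token_id = tokenizer.bos_token_id
model.generation_config.eos_token_id = tokenizer.eos_token_id

Then pass both the model and this tokenizer (as processing_class) to SFTTrainer, so training and later inference use the same special tokens.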
I am a newbie. I followed the guides for the conversational format:
messages = [
    {
        "role": "user",
        "content": "question",
    },
    {"role": "assistant", "content": "answer"},
]
SFTTrainer accepted that format.
There is no default chat_template anymore, so one has to define the chat template explicitly and then preprocess the data with it before handing the dataset to the trainer.
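For example, something like this; a sketch assuming the tokenizer has no chat template of its own (the ChatML-style Jinja template and the gpt2 tokenizer below are only illustrative choices, not what any particular model expects):

# Sketch: attach a simple ChatML-style chat template to a tokenizer that has
# none, then render the conversational messages into plain text for SFT.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder tokenizer without a chat template

tokenizer.chat_template = (
    "{% for message in messages %}"
    "<|im_start|>{{ message['role'] }}\n{{ message['content'] }}<|im_end|>\n"
    "{% endfor %}"
)

messages = [
    {"role": "user", "content": "question"},
    {"role": "assistant", "content": "answer"},
]

# Render the conversation into a single training string; with a datasets.Dataset
# you would do the same inside a .map() call and store the result in a "text" column.
print(tokenizer.apply_chat_template(messages, tokenize=False))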