Merge pull request #1 from evalstate/feature/trl-fixes
`trl/references/training_methods.md` (changed)
````diff
@@ -24,11 +24,14 @@ trainer = SFTTrainer(
         output_dir="my-model",
         push_to_hub=True,
         hub_model_id="username/my-model",
+        eval_strategy="no",  # Disable eval for simple example
     )
 )
 trainer.train()
 ```
 
+**Note:** For production training with evaluation monitoring, see `scripts/train_sft_example.py`
+
 **Documentation:** `hf_doc_fetch("https://huggingface.co/docs/trl/sft_trainer")`
 
 ## Direct Preference Optimization (DPO)
````
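The first hunk only shows the tail of the SFT example, so for context here is a minimal sketch of what the full snippet presumably looks like once `eval_strategy="no"` is applied. The model id and dataset are placeholder assumptions (not taken from the PR), and the call style follows recent `trl` releases, where `SFTTrainer` accepts a model name string directly.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset; the real example's dataset is not shown in the hunk
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",  # placeholder model id
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="my-model",
        push_to_hub=True,
        hub_model_id="username/my-model",
        eval_strategy="no",  # Disable eval for simple example
    ),
)
trainer.train()
```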
````diff
@@ -52,11 +55,14 @@ trainer = DPOTrainer(
     args=DPOConfig(
         output_dir="dpo-model",
         beta=0.1,  # KL penalty coefficient
+        eval_strategy="no",  # Disable eval for simple example
     )
 )
 trainer.train()
 ```
 
+**Note:** For production training with evaluation monitoring, see `scripts/train_dpo_example.py`
+
 **Documentation:** `hf_doc_fetch("https://huggingface.co/docs/trl/dpo_trainer")`
 
 ## Group Relative Policy Optimization (GRPO)
````
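Likewise, a minimal sketch of the DPO snippet around the second hunk, with placeholder model, tokenizer, and dataset names (assumptions, not from the PR). `processing_class` is the tokenizer argument name in recent `trl` versions; older releases used `tokenizer=`.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder model id
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Placeholder preference dataset with prompt/chosen/rejected columns
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

trainer = DPOTrainer(
    model=model,
    processing_class=tokenizer,  # `tokenizer=` in older trl releases
    train_dataset=dataset,
    args=DPOConfig(
        output_dir="dpo-model",
        beta=0.1,  # KL penalty coefficient
        eval_strategy="no",  # Disable eval for simple example
    ),
)
trainer.train()
```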