evalstate committed
Commit eb23fb4 · 1 Parent(s): 59eece6

remove kto/ppo

Files changed (2)
  1. trl/SKILL.md +2 -2
  2. trl/references/training_methods.md +0 -31
trl/SKILL.md CHANGED
@@ -193,7 +193,7 @@ TRL provides battle-tested scripts for all methods. Can be run from URLs:

```python
hf_jobs("uv", {
- "script": "https://raw.githubusercontent.com/huggingface/trl/main/examples/scripts/sft.py",
+ "script": "https://github.com/huggingface/trl/blob/main/trl/scripts/sft.py",
"script_args": [
"--model_name_or_path", "Qwen/Qwen2.5-0.5B",
"--dataset_name", "trl-lib/Capybara",
@@ -209,7 +209,7 @@ hf_jobs("uv", {

**Benefits:** No code to write, maintained by TRL team, production-tested
**When to use:** Standard TRL training, quick experiments, don't need custom code
- **Available:** sft.py, dpo.py, grpo.py, kto.py, reward.py, ppo.py - https://github.com/huggingface/trl/tree/main/examples/scripts
+ **Available:** Scripts are available from https://github.com/huggingface/trl/tree/main/examples/scripts

### Finding More UV Scripts on Hub

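For context, a minimal sketch of how the full call reads after this change. Only the `"script"` URL and the first two `script_args` appear in the hunk above; the remaining flags are illustrative placeholders for the kind of options TRL's sft.py accepts and are not part of the commit.

```python
# Sketch of the hf_jobs invocation after this commit (not part of the diff itself).
# "script" and the first two script_args come from the hunk above; the remaining
# flags are illustrative examples, not values taken from SKILL.md.
hf_jobs("uv", {
    "script": "https://github.com/huggingface/trl/blob/main/trl/scripts/sft.py",
    "script_args": [
        "--model_name_or_path", "Qwen/Qwen2.5-0.5B",
        "--dataset_name", "trl-lib/Capybara",
        "--output_dir", "qwen2.5-0.5b-capybara-sft",  # illustrative
        "--num_train_epochs", "1",                    # illustrative
    ],
})
```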
trl/references/training_methods.md CHANGED
@@ -94,19 +94,6 @@ hf_jobs("uv", {

**Documentation:** `hf_doc_fetch("https://huggingface.co/docs/trl/grpo_trainer")`

- ## Kahneman-Tversky Optimization (KTO)
-
- **What it is:** Preference tuning without paired data - uses independent positive/negative examples.
-
- **When to use:**
- - Have preference data but not paired comparisons
- - Simpler data collection than DPO
- - Want to incorporate human feedback without explicit pairs
-
- **Dataset format:** Examples with binary labels (desirable/undesirable) but not paired
-
- **Documentation:** `hf_doc_fetch("https://huggingface.co/docs/trl/kto_trainer")`
-
## Reward Modeling

**What it is:** Train a reward model to score responses, used as a component in RLHF pipelines.
@@ -120,21 +107,6 @@ hf_jobs("uv", {

**Documentation:** `hf_doc_fetch("https://huggingface.co/docs/trl/reward_trainer")`

- ## Proximal Policy Optimization (PPO)
-
- **What it is:** Classic RLHF method using a reward model to guide policy optimization.
-
- **When to use:**
- - Full RLHF pipeline
- - Have trained reward model
- - Need fine-grained control over optimization
-
- **Requirements:** Pre-trained reward model
-
- **Note:** PPO is more complex than DPO. For most use cases, start with DPO.
-
- **Documentation:** `hf_doc_fetch("https://huggingface.co/docs/trl/ppo_trainer")`
-
## Method Selection Guide

| Method | Complexity | Data Required | Use Case |
@@ -142,9 +114,7 @@ hf_jobs("uv", {
| **SFT** | Low | Demonstrations | Initial fine-tuning |
| **DPO** | Medium | Paired preferences | Post-SFT alignment |
| **GRPO** | Medium | Prompts + reward fn | Online RL with automatic rewards |
- | **KTO** | Medium | Unpaired preferences | Alignment with simpler data |
| **Reward** | Medium | Paired preferences | Building RLHF pipeline |
- | **PPO** | High | Demonstrations + reward model | Full RLHF |

## Recommended Pipeline

@@ -156,7 +126,6 @@ hf_jobs("uv", {
**For advanced RL scenarios:**
1. **Start with SFT** - Fine-tune base model
2. **Train reward model** - On preference data
- 3. **Apply GRPO or PPO** - Online RL with reward model

## Dataset Format Reference
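Since the selection guide keeps GRPO with "Prompts + reward fn" as its data requirement, here is a minimal sketch of what such a reward function can look like with TRL's GRPOTrainer, assuming a standard (non-conversational) dataset where each completion is a plain string; the length-based scoring is purely illustrative and not part of this commit.

```python
# Sketch of a custom reward function usable with TRL's GRPOTrainer (illustrative).
# The trainer passes completions (plus prompts and dataset columns via kwargs)
# and expects one float score per completion.
def brevity_reward(completions, **kwargs):
    # Toy reward: prefer shorter completions (illustrative heuristic only).
    return [-float(len(completion)) for completion in completions]

# Example wiring (assumed usage, not from the diff):
# trainer = GRPOTrainer(model=..., reward_funcs=brevity_reward, train_dataset=...)
```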