evalstate committed
Commit 1685a6c · 1 Parent(s): a8a2531

Clean repository: keep only trl skill

COMPARISON.md DELETED
@@ -1,235 +0,0 @@
# Training Script Comparison

## What Makes the Production Script "Clean"?

### ❌ Original Script Issues

```python
# 1. Double Trackio initialization
trackio.init(project="qwen-demo-sft", space_id="evalstate/trackio-demo")
# ... later ...
SFTConfig(report_to="trackio")  # TRL initializes AGAIN!
# Result: 2 spaces created!

# 2. Unclear what's customizable
model="Qwen/Qwen2.5-0.5B"  # Hard to find/change
dataset = load_dataset("trl-lib/Capybara")  # Buried in code

# 3. Mixed concerns
trackio.init(...)
dataset = load_dataset(...)
print(f"Sample: {dataset[0]}")  # Debugging mixed with setup
config = SFTConfig(...)
```

### ✅ Clean Production Script

```python
# 1. Single Trackio init
trackio.init(...)
SFTConfig(report_to="trackio")  # Uses the existing connection
# Result: 1 space!

# 2. Clear configuration section at the top
# ============================================================================
# CONFIGURATION - Customize via environment variables
# ============================================================================
MODEL = os.getenv("MODEL", "Qwen/Qwen2.5-0.5B")
DATASET = os.getenv("DATASET", "trl-lib/Capybara")
# All customizable params grouped together, with comments

# 3. Separated concerns
# Setup
if USE_TRACKIO:
    trackio.init(...)

# Load data
dataset = load_dataset(...)

# Train
trainer = SFTTrainer(...)
trainer.train()
```
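The pieces above assemble into one short script. Here is a minimal end-to-end sketch of the single-init pattern, using only calls that appear elsewhere in this repository (the project/space names and parameter values are illustrative):

```python
import os

import trackio
from datasets import load_dataset
from trl import SFTTrainer, SFTConfig

MODEL = os.getenv("MODEL", "Qwen/Qwen2.5-0.5B")
DATASET = os.getenv("DATASET", "trl-lib/Capybara")

# Setup: initialize Trackio exactly once
trackio.init(project="qwen-demo", space_id="evalstate/ml-experiments")

# Load data
dataset = load_dataset(DATASET, split="train[:50]")

# Train: report_to="trackio" reuses the connection opened above
trainer = SFTTrainer(
    model=MODEL,
    train_dataset=dataset,
    args=SFTConfig(output_dir="output", max_steps=20, report_to="trackio"),
)
trainer.train()
trackio.finish()
```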

## Key Improvements

### 1. Documentation Structure

```python
"""
CUSTOMIZABLE PARAMETERS (via environment variables):
    MODEL    - Model to fine-tune (default: Qwen/Qwen2.5-0.5B)
    DATASET  - Dataset name on Hub (default: trl-lib/Capybara)
    ...

EXAMPLE USAGE:
    hf_jobs("uv", {
        "env": {"MODEL": "meta-llama/Llama-3.2-1B"}
    })
"""
```

**Benefits:**
- ✅ Clear what can be changed
- ✅ Shows how to change it
- ✅ Includes examples
- ✅ Visible at the top of the file

### 2. Three-Tier Configuration

```
┌─────────────────────────────────────┐
│ TIER 1: Environment Variables       │
│ ✅ Change freely without editing    │
│ MODEL, DATASET, MAX_STEPS, etc.     │
├─────────────────────────────────────┤
│ TIER 2: Fixed Constants             │
│ ⚙️ Edit if needed (advanced)        │
│ LORA_R, GRADIENT_ACCUMULATION       │
├─────────────────────────────────────┤
│ TIER 3: Training Logic              │
│ 🔒 Don't modify                     │
│ Trainer initialization, etc.        │
└─────────────────────────────────────┘
```

**Benefits:**
- ✅ Clear what to change vs. what to leave alone
- ✅ Beginners change env vars only
- ✅ Advanced users can modify Tier 2
- ✅ Tier 3 is an implementation detail (see the skeleton below)

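A hypothetical skeleton of this layout (tier contents are illustrative; the training logic is elided):

```python
import os

# --- TIER 1: Environment variables (change freely) ---
MODEL = os.getenv("MODEL", "Qwen/Qwen2.5-0.5B")
DATASET = os.getenv("DATASET", "trl-lib/Capybara")
MAX_STEPS = int(os.getenv("MAX_STEPS", "20"))

# --- TIER 2: Fixed constants (edit the script if needed) ---
LORA_R = 16
GRADIENT_ACCUMULATION = 2

# --- TIER 3: Training logic (don't modify) ---
def main():
    # Build the trainer from the values above and run it.
    ...

if __name__ == "__main__":
    main()
```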
### 3. Inline Documentation

```python
# Model Selection
MODEL = os.getenv("MODEL", "Qwen/Qwen2.5-0.5B")
# Common options:
#   - Qwen/Qwen2.5-0.5B (fast, demo)
#   - Qwen/Qwen2.5-3B (production)
#   - meta-llama/Llama-3.2-1B
#   - HuggingFaceTB/SmolLM2-1.7B
```

**Benefits:**
- ✅ Context right where you need it
- ✅ Shows concrete examples
- ✅ Explains when to use each option

### 4. Clear Output

```python
print("=" * 80)
print("🚀 TRAINING CONFIGURATION")
print("=" * 80)
print(f"Model:   {MODEL}")
print(f"Dataset: {DATASET}")
...
```

**Benefits:**
- ✅ Easy to verify settings before training
- ✅ Appears in job logs
- ✅ Helpful for debugging
- ✅ Professional appearance

### 5. Single Monitoring Space

```python
# All projects use the same space
TRACKIO_SPACE = "evalstate/ml-experiments"

# Different project names for filtering
project = OUTPUT_REPO.split('/')[-1]  # e.g., "qwen-demo"
```

**Benefits:**
- ✅ One dashboard for all experiments
- ✅ Easy comparison across projects
- ✅ Filter by project name
- ✅ No space proliferation

## Usage Comparison

### Original (Hard to Customize)

```python
# To change the model, you must edit the script:
# 1. Download the script
# 2. Edit: model="new-model"
# 3. Upload to the Hub
# 4. Submit the job

hf_jobs("uv", {
    "script": "train.py",  # Your modified version
    "flavor": "t4-small"
})
```

### Production (Easy to Customize)

```python
# Just pass env vars:
hf_jobs("uv", {
    "script": "train_production_documented.py",  # Same script!
    "flavor": "a10g-large",
    "env": {
        "MODEL": "meta-llama/Llama-3.2-1B",
        "MAX_STEPS": "100"
    }
})
```

**No script editing needed!**
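Because nothing is hard-coded, the same script also supports simple parameter sweeps. A hypothetical example, following the `hf_jobs` call convention used above (the model list is illustrative):

```python
# Submit one job per candidate model; only the env block changes.
for model in ["Qwen/Qwen2.5-0.5B", "meta-llama/Llama-3.2-1B"]:
    hf_jobs("uv", {
        "script": "train_production_documented.py",
        "flavor": "a10g-large",
        "env": {"MODEL": model, "MAX_STEPS": "100"},
        "secrets": {"HF_TOKEN": "$HF_TOKEN"},
    })
```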

## File Summary

We created three files:

1. **`train_production_documented.py`** - The main script
   - 150 lines (well-commented)
   - Clear three-tier structure
   - Inline documentation
   - Ready to use

2. **`TRAINING_GUIDE.md`** - Usage guide
   - Quick-start examples
   - Parameter reference
   - Troubleshooting
   - Best practices

3. **`COMPARISON.md`** (this file) - Design rationale
   - Why changes were made
   - Before/after comparison
   - Benefits explained

## When to Use Each Approach

| Approach | When to Use |
|----------|-------------|
| **Minimal** | Learning, one-off tests |
| **Production** | Reusable experiments, multiple runs |
| **Organized** | Team projects, complex workflows |

## Next Steps

1. **Upload to the Hub:**
   ```bash
   hf upload evalstate/demo-training-scripts train_production_documented.py
   hf upload evalstate/demo-training-scripts TRAINING_GUIDE.md
   ```

2. **Run it:**
   ```python
   hf_jobs("uv", {
       "script": "https://huggingface.co/evalstate/demo-training-scripts/resolve/main/train_production_documented.py",
       "flavor": "t4-small",
       "timeout": "20m",
       "secrets": {"HF_TOKEN": "$HF_TOKEN"}
   })
   ```

3. **Customize it:**
   ```python
   "env": {"MODEL": "your-model", "MAX_STEPS": "50"}
   ```
SCRIPTS_COMPARISON.md DELETED
@@ -1,199 +0,0 @@
# Scripts Comparison: Current Directory vs. TRL Skill

## Two Different Locations

### 1. Current Directory Scripts (What We Just Created)
**Location:** `/home/ssmith/source/training/.claude/skills/`

These are the training scripts we created during this session:

| Script | Lines | Purpose |
|--------|-------|---------|
| `train_minimal.py` | 25 | Learning/demo - bare minimum |
| `train_production.py` | 75 | Reusable - env vars, no docs |
| `train_production_documented.py` | 150 | Reusable - env vars + extensive docs ⭐ |
| `train_organized.py` | 140 | Enterprise - functions, class-based |
| `train_demo.py` | 80 | Original demo (has the Trackio bug) |

### 2. TRL Skill Scripts (Pre-existing)
**Location:** `/home/ssmith/source/training/.claude/skills/trl/scripts/`

These are part of the TRL skill and were already there:

| Script | Lines | Purpose |
|--------|-------|---------|
| `train_sft_example.py` | 111 | SFT training example (from the skill) |
| `train_dpo_example.py` | 98 | DPO training example (from the skill) |
| `train_grpo_example.py` | 97 | GRPO training example (from the skill) |
| `validate_dataset.py` | 175 | Dataset validation utility |
| `estimate_cost.py` | 149 | Cost estimation utility |
| `convert_to_gguf.py` | 301 | GGUF conversion utility |

## Key Differences

### Purpose

**Current directory (our new scripts):**
- Created during THIS conversation
- Demonstrate different levels of documentation
- Show the progression from simple → documented → organized
- Teaching examples for "clean code"

**TRL skill scripts:**
- Part of the pre-existing TRL skill
- Production examples for different training methods
- Utility scripts for common tasks
- Reference implementations

### Focus

**Current directory:**
```
train_minimal.py                → "How simple can we go?"
train_production.py             → "Add configurability"
train_production_documented.py  → "Add documentation" ⭐
train_organized.py              → "Add structure"
```
*Shows the evolution of code quality*

**TRL skill scripts:**
```
train_sft_example.py   → "How to do SFT"
train_dpo_example.py   → "How to do DPO"
train_grpo_example.py  → "How to do GRPO"
validate_dataset.py    → "Utility: validate format"
estimate_cost.py       → "Utility: estimate cost"
```
*Shows different training methods + utilities*

## Detailed Comparison: Our Scripts vs. the TRL Skill's train_sft_example.py

### trl/scripts/train_sft_example.py (111 lines)
```python
# From the TRL skill - complete production example
import trackio
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTTrainer, SFTConfig

# Initialize Trackio
trackio.init(
    project="qwen-capybara-sft",
    space_id="username/my-trackio-dashboard",
    config={...}
)

# Load dataset
dataset = load_dataset("trl-lib/Capybara", split="train")

# Configure training
config = SFTConfig(
    output_dir="qwen-capybara-sft",
    push_to_hub=True,
    hub_model_id="username/qwen-capybara-sft",
    num_train_epochs=3,
    per_device_train_batch_size=4,
    # ... many parameters explicitly set
)

# Train
trainer = SFTTrainer(...)
trainer.train()
trainer.push_to_hub()
```

**Characteristics:**
- ✅ Complete production example
- ✅ Includes Trackio (single init)
- ✅ Full dataset (not limited to 50 examples)
- ✅ Many parameters explicitly set
- ❌ NOT configurable via env vars
- ❌ Hard-coded values (must edit to change)
- ❌ Minimal inline documentation

### Our train_production_documented.py (150 lines)
```python
# Our new script - configurable + documented
"""
CUSTOMIZABLE PARAMETERS (via environment variables):
    MODEL    - Model to fine-tune (default: Qwen/Qwen2.5-0.5B)
    DATASET  - Dataset name on Hub (default: trl-lib/Capybara)
    ...
"""

import os
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTTrainer, SFTConfig

# CONFIGURATION - Customize via environment variables
MODEL = os.getenv("MODEL", "Qwen/Qwen2.5-0.5B")
# Common options:
#   - Qwen/Qwen2.5-0.5B (fast, demo)
#   - Qwen/Qwen2.5-3B (production)

DATASET = os.getenv("DATASET", "trl-lib/Capybara")
MAX_STEPS = int(os.getenv("MAX_STEPS", "20"))
USE_TRACKIO = os.getenv("USE_TRACKIO", "true").lower() == "true"

# ... rest of training code
```

**Characteristics:**
- ✅ Environment-based configuration (see the parsing sketch below)
- ✅ Extensive inline documentation
- ✅ Examples in comments
- ✅ Reusable without editing
- ✅ Clear sections
- ✅ Optional Trackio
- ⚠️ Smaller dataset (50 examples) for the demo

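The env-var parsing one-liners above generalize into a few small helpers. A hypothetical sketch (these helpers appear in neither script; they just make the type conversions explicit):

```python
import os

def env_str(name: str, default: str) -> str:
    """String setting, e.g. MODEL or DATASET."""
    return os.getenv(name, default)

def env_int(name: str, default: int) -> int:
    """Integer setting, e.g. MAX_STEPS; raises ValueError on bad input."""
    return int(os.getenv(name, str(default)))

def env_bool(name: str, default: bool) -> bool:
    """Boolean setting, e.g. USE_TRACKIO; only 'true' (any case) enables it."""
    return os.getenv(name, str(default)).lower() == "true"

MODEL = env_str("MODEL", "Qwen/Qwen2.5-0.5B")
MAX_STEPS = env_int("MAX_STEPS", 20)
USE_TRACKIO = env_bool("USE_TRACKIO", True)
```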
## When to Use What?

### Use the TRL Skill Scripts When:
- ✅ You want complete production examples
- ✅ You need reference implementations
- ✅ You want to see different training methods (SFT, DPO, GRPO)
- ✅ You need utility scripts (validation, cost estimation)
- ✅ You're okay editing the script to customize it

### Use Our New Scripts When:
- ✅ You want quick demos (train_minimal.py)
- ✅ You need reusable scripts (train_production.py)
- ✅ You want self-documenting code (train_production_documented.py)
- ✅ You need to run many experiments with different settings
- ✅ You want to customize via environment variables
- ✅ You're sharing with a team and want clear documentation

## Relationship

```
TRL Skill Scripts (Reference)

└─ train_sft_example.py (111 lines, production example)

Our New Scripts (Teaching Progression)

├─ train_minimal.py (25 lines, bare minimum)
├─ train_production.py (75 lines, configurable)
├─ train_production_documented.py (150 lines, configurable + docs) ⭐
└─ train_organized.py (140 lines, structured)
```

## Summary

**TRL skill scripts:**
- Part of the TRL skill infrastructure
- Reference implementations
- Show "what to do"
- Must edit to customize

**Our new scripts:**
- Created during this conversation
- Teaching progression (simple → documented → organized)
- Show "how to organize code"
- Customize via environment variables

**They complement each other!**
- The TRL scripts show the different training methods
- Our scripts show different code organization styles
SCRIPT_COMPARISON.md DELETED
@@ -1,336 +0,0 @@
# Training Scripts Comparison

## All Scripts in Directory

| Script | Lines | Purpose | Best For |
|--------|-------|---------|----------|
| **train_minimal.py** | ~25 | Absolute bare minimum | Learning, throw-away tests |
| **train_production.py** | ~75 | Env-configurable, clean | Reusable experiments |
| **train_production_documented.py** | ~150 | Same as production + docs | Sharing with others |
| **train_organized.py** | ~140 | Functions, class-based | Complex team projects |
| **train_demo.py** | ~80 | Original demo script | Initial demo (has issues) |

## Quick Comparison

### 1. train_minimal.py (25 lines)
```python
# Simplest possible
from datasets import load_dataset
from trl import SFTTrainer, SFTConfig

dataset = load_dataset("trl-lib/Capybara", split="train[:50]")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="output",
        push_to_hub=True,
        hub_model_id="evalstate/qwen-demo-minimal",
        max_steps=20,
        report_to="none",  # No monitoring
    )
)
trainer.train()
```

**Pros:**
- ✅ Super simple
- ✅ Easy to understand
- ✅ Fast execution (no monitoring overhead)

**Cons:**
- ❌ Hard to customize (must edit the code)
- ❌ No monitoring
- ❌ No environment variables

**Use when:**
- Learning TRL basics
- Running a one-time quick test
- You don't care about metrics

---

### 2. train_production.py (75 lines)
```python
# Environment-configurable
MODEL = os.getenv("MODEL", "Qwen/Qwen2.5-0.5B")
DATASET = os.getenv("DATASET", "trl-lib/Capybara")
OUTPUT_REPO = os.getenv("OUTPUT_REPO", "evalstate/qwen-capybara-sft")
MAX_STEPS = int(os.getenv("MAX_STEPS", "20"))
USE_TRACKIO = os.getenv("USE_TRACKIO", "true").lower() == "true"

if USE_TRACKIO:
    trackio.init(
        project=OUTPUT_REPO.split('/')[-1],
        space_id="evalstate/ml-experiments",
        config={...}
    )

trainer = SFTTrainer(...)
trainer.train()
```

**Pros:**
- ✅ Reusable (change via env vars)
- ✅ Trackio monitoring
- ✅ Clean structure
- ✅ Production-ready

**Cons:**
- ❌ Less documentation in the code
- ❌ You need to know which env vars exist

**Use when:**
- Running multiple experiments
- You need to compare different models/datasets
- You want monitoring

---

### 3. train_production_documented.py (150 lines) ⭐ RECOMMENDED
```python
"""
CUSTOMIZABLE PARAMETERS (via environment variables):
    MODEL         - Model to fine-tune (default: Qwen/Qwen2.5-0.5B)
    DATASET       - Dataset name on Hub (default: trl-lib/Capybara)
    OUTPUT_REPO   - Where to save the model (default: evalstate/qwen-capybara-sft)
    MAX_STEPS     - Training steps (default: 20)
    BATCH_SIZE    - Batch size per device (default: 2)
    LEARNING_RATE - Learning rate (default: 2e-5)
    USE_TRACKIO   - Enable monitoring (default: true)
"""

# Model Selection
MODEL = os.getenv("MODEL", "Qwen/Qwen2.5-0.5B")
# Common options:
#   - Qwen/Qwen2.5-0.5B (fast, demo)
#   - Qwen/Qwen2.5-3B (production)
#   - meta-llama/Llama-3.2-1B

# Dataset Selection
DATASET = os.getenv("DATASET", "trl-lib/Capybara")
# Use any conversational dataset with a "messages" field

# Training Parameters
MAX_STEPS = int(os.getenv("MAX_STEPS", "20"))
# Quick demo: 10-20 | Development: 100-500 | Production: 1000+

# ... rest of training code (same as train_production.py)
```

**Pros:**
- ✅ Same functionality as train_production.py
- ✅ Self-documenting
- ✅ Shows examples in comments
- ✅ Easy for new users
- ✅ Professional looking

**Cons:**
- ❌ More lines (but mostly comments)

**Use when:**
- Sharing with a team
- You need documentation
- You want examples inline
- **⭐ Default choice for most cases**

---

### 4. train_organized.py (140 lines)
```python
class Config:
    """Training configuration with environment overrides"""
    MODEL = os.getenv("MODEL", "Qwen/Qwen2.5-0.5B")
    DATASET = os.getenv("DATASET", "trl-lib/Capybara")
    # ... all config in one place

def setup_monitoring(config: Config):
    """Initialize Trackio for experiment tracking"""
    ...

def load_and_validate_dataset(config: Config):
    """Load the dataset and perform basic validation"""
    ...

def main():
    config = Config()
    setup_monitoring(config)
    dataset = load_and_validate_dataset(config)
    trainer = train(dataset)
    ...
```
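Filling in the elided bodies, a self-contained miniature of this structure might look like the following (a sketch only; the real script wires these pieces into SFTTrainer):

```python
import os

from datasets import load_dataset

class Config:
    """Training configuration with environment overrides."""
    MODEL = os.getenv("MODEL", "Qwen/Qwen2.5-0.5B")
    DATASET = os.getenv("DATASET", "trl-lib/Capybara")
    MAX_STEPS = int(os.getenv("MAX_STEPS", "20"))
    USE_TRACKIO = os.getenv("USE_TRACKIO", "true").lower() == "true"

def setup_monitoring(config: Config) -> None:
    """Initialize Trackio only when monitoring is enabled."""
    if config.USE_TRACKIO:
        import trackio
        trackio.init(project="demo", space_id="evalstate/ml-experiments")

def load_and_validate_dataset(config: Config):
    """Load the dataset and fail fast if the expected field is missing."""
    dataset = load_dataset(config.DATASET, split="train[:50]")
    if "messages" not in dataset.column_names:
        raise ValueError(f"{config.DATASET} has no 'messages' column")
    return dataset

def main():
    config = Config()
    setup_monitoring(config)
    dataset = load_and_validate_dataset(config)
    print(f"Ready to train {config.MODEL} on {len(dataset)} examples")

if __name__ == "__main__":
    main()
```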

**Pros:**
- ✅ Cleanest separation of concerns
- ✅ Easy to test individual functions
- ✅ Best for complex workflows
- ✅ Team-friendly structure

**Cons:**
- ❌ More complex
- ❌ Overkill for simple training

**Use when:**
- Working on a team project
- You need to extend/customize heavily
- You want to unit-test components
- You have a complex training pipeline

---

### 5. train_demo.py (80 lines) - Original Demo
```python
# Has the double-Trackio issue!
trackio.init(
    project="qwen-demo-sft",
    space_id="evalstate/trackio-demo",  # Creates space 1
    ...
)

config = SFTConfig(
    report_to="trackio",  # TRL creates space 2!
    ...
)
```

**Issues:**
- ❌ Creates 2 Trackio spaces
- ❌ Not configurable
- ❌ Mixed concerns

**This was our original script that we improved!** A sketch of the fix follows.

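One way to repair the duplicate space, following the single-init pattern from COMPARISON.md (the space and project names are illustrative):

```python
import trackio
from trl import SFTConfig

# Initialize Trackio exactly once, pointing at the shared dashboard...
trackio.init(project="qwen-demo-sft", space_id="evalstate/ml-experiments")

# ...and let report_to="trackio" reuse that connection instead of
# creating a second space.
config = SFTConfig(output_dir="qwen-demo-sft", report_to="trackio")
```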
---

## Decision Matrix

### Choose Based On Your Needs:

```
Need simplicity above all?
→ train_minimal.py

Running ONE experiment?
→ train_minimal.py or train_production.py

Running MULTIPLE experiments?
→ train_production_documented.py ⭐

Sharing with others?
→ train_production_documented.py ⭐

Complex team project?
→ train_organized.py

Just learning?
→ train_minimal.py
```

## Key Differences

### Customization Method

| Script | How to Customize |
|--------|------------------|
| train_minimal.py | Edit the code |
| train_production.py | Environment variables |
| train_production_documented.py | Environment variables (with docs) |
| train_organized.py | Environment variables or the Config class |

### Monitoring

| Script | Monitoring |
|--------|-----------|
| train_minimal.py | None |
| train_production.py | Trackio (optional) |
| train_production_documented.py | Trackio (optional) |
| train_organized.py | Trackio (optional) |
| train_demo.py | Trackio (with the bug!) |

### Lines of Code

| Script | Lines | Actual Code | Comments/Docs |
|--------|-------|-------------|---------------|
| train_minimal.py | 25 | 20 | 5 |
| train_production.py | 75 | 60 | 15 |
| train_production_documented.py | 150 | 60 | 90 |
| train_organized.py | 140 | 100 | 40 |

**Note:** train_production.py and train_production_documented.py contain the SAME actual code, just different amounts of documentation!

## The Winner: train_production_documented.py ⭐

### Why It's the Best Choice:

1. **Self-documenting** - Everything is explained inline
2. **Easy to customize** - Just pass env vars
3. **Shows examples** - Comments show real options
4. **Professional** - Clean output formatting
5. **Single Trackio space** - No duplication
6. **Reusable** - The same script serves all experiments

### What Makes It Different from train_production.py?

**Same code, more documentation!**

```python
# train_production.py (minimal comments)
MODEL = os.getenv("MODEL", "Qwen/Qwen2.5-0.5B")

# train_production_documented.py (helpful comments)
MODEL = os.getenv("MODEL", "Qwen/Qwen2.5-0.5B")
# Common options:
#   - Qwen/Qwen2.5-0.5B (fast, demo)
#   - Qwen/Qwen2.5-3B (production)
#   - meta-llama/Llama-3.2-1B
```

The actual execution is identical, but the documented version is much easier to understand and use!

## Usage Examples

### Minimal
```python
# Must edit the script to change the model
hf_jobs("uv", {"script": "train_minimal.py", "flavor": "t4-small"})
```

### Production (Either Version)
```python
# Change the model via the environment
hf_jobs("uv", {
    "script": "train_production_documented.py",
    "flavor": "a10g-large",
    "env": {
        "MODEL": "meta-llama/Llama-3.2-1B",
        "MAX_STEPS": "100"
    }
})
```

### Organized
```python
# Same as production, just a different internal structure
hf_jobs("uv", {
    "script": "train_organized.py",
    "flavor": "a10g-large",
    "env": {"MODEL": "meta-llama/Llama-3.2-1B"}
})
```

## Recommendation

**Use `train_production_documented.py` for:**
- ✅ All production work
- ✅ Sharing with teammates
- ✅ Multiple experiments
- ✅ When you want documentation

**Use `train_minimal.py` for:**
- ✅ Quick learning
- ✅ Throwaway tests
- ✅ When you want absolute simplicity

**Use `train_organized.py` for:**
- ✅ Complex team projects
- ✅ When you need to heavily extend the code
- ✅ Unit-testing individual components
TRAINING_GUIDE.md DELETED
@@ -1,219 +0,0 @@
# Production Training Script Guide

## Quick Start

**Run with defaults (5-10 minute demo):**
```python
hf_jobs("uv", {
    "script": "https://huggingface.co/evalstate/demo-training-scripts/resolve/main/train_production_documented.py",
    "flavor": "t4-small",
    "timeout": "20m",
    "secrets": {"HF_TOKEN": "$HF_TOKEN"}
})
```

## Customizable Parameters

All parameters can be customized via environment variables, without modifying the script:

### 🎯 Most Common Settings

| Parameter | Default | Description | When to Change |
|-----------|---------|-------------|----------------|
| `MODEL` | `Qwen/Qwen2.5-0.5B` | Model to fine-tune | Use a larger model for better quality |
| `DATASET` | `trl-lib/Capybara` | Training dataset | Use your own dataset |
| `OUTPUT_REPO` | `evalstate/qwen-capybara-sft` | Where to save | Always set to your username |
| `MAX_STEPS` | `20` | Training duration | 100+ for real training |
| `LEARNING_RATE` | `2e-5` | Learning rate | Tune if the loss is unstable |

### 📊 Monitoring

| Parameter | Default | Description | When to Change |
|-----------|---------|-------------|----------------|
| `USE_TRACKIO` | `true` | Enable real-time monitoring | Set to `false` for a faster demo |

### ⚙️ Advanced Settings

| Parameter | Default | Description | When to Change |
|-----------|---------|-------------|----------------|
| `BATCH_SIZE` | `2` | Batch size per GPU | Increase for larger GPUs |
| `LEARNING_RATE` | `2e-5` | Learning rate | Typical range: 1e-5 to 5e-5 |

## Usage Examples

### Example 1: Quick Demo (Default)
```python
hf_jobs("uv", {
    "script": "train_production_documented.py",
    "flavor": "t4-small",
    "timeout": "20m",
    "secrets": {"HF_TOKEN": "$HF_TOKEN"}
})
```

**Result:** Trains Qwen-0.5B for 20 steps (~10 minutes, ~$0.20)

### Example 2: Custom Model & Dataset
```python
hf_jobs("uv", {
    "script": "train_production_documented.py",
    "flavor": "a10g-large",
    "timeout": "2h",
    "env": {
        "MODEL": "meta-llama/Llama-3.2-1B",
        "DATASET": "HuggingFaceH4/ultrachat_200k",
        "OUTPUT_REPO": "your-username/llama-ultrachat",
        "MAX_STEPS": "100"
    },
    "secrets": {"HF_TOKEN": "$HF_TOKEN"}
})
```

**Result:** Trains Llama-3.2-1B for 100 steps (~1 hour, ~$5)

### Example 3: Longer Training Run
```python
hf_jobs("uv", {
    "script": "train_production_documented.py",
    "flavor": "a10g-large",
    "timeout": "6h",
    "env": {
        "MODEL": "Qwen/Qwen2.5-3B",
        "DATASET": "your-username/my-dataset",
        "OUTPUT_REPO": "your-username/qwen3b-custom",
        "MAX_STEPS": "500",
        "BATCH_SIZE": "4",
        "LEARNING_RATE": "1e-5"
    },
    "secrets": {"HF_TOKEN": "$HF_TOKEN"}
})
```

**Result:** Production training (~4-5 hours, ~$20-25)

### Example 4: Without Monitoring (Fastest)
```python
hf_jobs("uv", {
    "script": "train_production_documented.py",
    "flavor": "t4-small",
    "timeout": "15m",
    "env": {
        "USE_TRACKIO": "false",
        "OUTPUT_REPO": "your-username/quick-test"
    },
    "secrets": {"HF_TOKEN": "$HF_TOKEN"}
})
```

**Result:** The fastest possible demo (~8 minutes)

## How It Works

### Three Configuration Levels

```
┌─────────────────────────────────────┐
│ 1. CUSTOMIZABLE via env vars        │ ← Change these freely
│    MODEL, DATASET, MAX_STEPS, etc.  │
├─────────────────────────────────────┤
│ 2. FIXED (edit script if needed)    │ ← Advanced users only
│    LORA_R, GRADIENT_ACCUMULATION    │
├─────────────────────────────────────┤
│ 3. TRAINING LOGIC                   │ ← Don't modify
│    Trainer initialization, etc.     │
└─────────────────────────────────────┘
```

### Single Trackio Space

All experiments go to **one dashboard**: `evalstate/ml-experiments`

- ✅ Easy comparison across experiments
- ✅ Filtered by project name
- ✅ No space clutter

**Access your dashboard:**
https://huggingface.co/spaces/evalstate/ml-experiments

## Recommended Models

| Model | Size | Speed | Use Case | Hardware |
|-------|------|-------|----------|----------|
| `Qwen/Qwen2.5-0.5B` | 0.5B | ⚡⚡⚡ | Quick tests | t4-small |
| `HuggingFaceTB/SmolLM2-1.7B` | 1.7B | ⚡⚡ | Development | t4-medium |
| `meta-llama/Llama-3.2-1B` | 1B | ⚡⚡ | Production | a10g-small |
| `Qwen/Qwen2.5-3B` | 3B | ⚡ | High quality | a10g-large |

## Recommended Datasets

**Conversational (chat/instruction):**
- `trl-lib/Capybara` - High-quality chat (16K examples)
- `HuggingFaceH4/ultrachat_200k` - Diverse conversations
- `argilla/distilabel-capybara-dpo-7k-binarized` - Preference data (for DPO)

**Your own dataset:**
- Must have a `"messages"` field in conversational format (see the example below)
- See: https://huggingface.co/docs/trl/dataset_formats

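For reference, one record in the conversational format is a `"messages"` list of role/content turns; the contents here are illustrative:

```python
# A single training example in the conversational ("messages") format.
example = {
    "messages": [
        {"role": "user", "content": "What is supervised fine-tuning?"},
        {"role": "assistant", "content": "Training a model on labeled input-output pairs."},
    ]
}
```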
## Hardware Selection

| Hardware | Cost/hr | When to Use |
|----------|---------|-------------|
| `t4-small` | ~$0.75 | Quick demos (20 steps) |
| `t4-medium` | ~$1.50 | Small models, testing |
| `a10g-small` | ~$3.50 | Production (1-3B models) |
| `a10g-large` | ~$5.00 | Production (3-7B models) |

## Timeout Guidelines

| Scenario | Recommended Timeout |
|----------|---------------------|
| Quick demo (20 steps) | 15-20m |
| Development (100 steps) | 1-2h |
| Production (500+ steps) | 4-6h |

**Always add a 20-30% buffer** for setup time!

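A quick back-of-the-envelope check, assuming you know roughly how long one step takes (the numbers below are made up; measure yours from a short pilot run):

```python
# Hypothetical numbers: 500 steps at ~20 s/step, plus a 30% buffer.
steps = 500
seconds_per_step = 20   # from a pilot run
setup_minutes = 10      # image pull, model download, etc.
total_minutes = steps * seconds_per_step / 60 + setup_minutes
with_buffer = total_minutes * 1.3
print(f"Request a timeout of about {with_buffer / 60:.1f}h")  # ~3.8h
```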
## Troubleshooting

### "Out of memory"
**Solution:** Reduce `BATCH_SIZE` or use a larger GPU
```python
"env": {"BATCH_SIZE": "1"}  # Smallest possible
```

### "Dataset not found"
**Check:** The dataset name is correct and the dataset is public
```python
"env": {"DATASET": "username/dataset-name"}
```

### "Permission denied pushing to Hub"
**Check:** `OUTPUT_REPO` uses your username
```python
"env": {"OUTPUT_REPO": "YOUR-USERNAME/model-name"}
```

### Training too slow/expensive
**Solution:** Reduce the steps or use a smaller model
```python
"env": {
    "MAX_STEPS": "10",            # Faster
    "MODEL": "Qwen/Qwen2.5-0.5B"  # Smaller
}
```

## Next Steps

After training completes:

1. **View your model:** https://huggingface.co/YOUR-USERNAME/model-name
2. **Check metrics:** https://huggingface.co/spaces/evalstate/ml-experiments
3. **Test the model:** Use the Inference API or download it locally
4. **Share:** The model is public by default (or make it private in settings)

## Questions?

- 📖 Full TRL docs: https://huggingface.co/docs/trl
- 💬 Ask in the Hugging Face Discord
- 🐛 Issues: Check the job logs for error messages
dataset_inspector.py DELETED
@@ -1,416 +0,0 @@
#!/usr/bin/env python3
# /// script
# dependencies = []
# ///
"""
Dataset Format Inspector for TRL Training (LLM-Optimized Output)

Inspects Hugging Face datasets to determine TRL training compatibility.
Uses the Datasets Server API for instant results - no dataset download needed!

ULTRA-EFFICIENT: Uses the HF Datasets Server API - completes in <2 seconds.

Usage with HF Jobs:
    hf_jobs("uv", {
        "script": "https://huggingface.co/datasets/evalstate/trl-helpers/raw/main/dataset_inspector.py",
        "script_args": ["--dataset", "your/dataset", "--split", "train"]
    })

Local usage (stdlib only, so plain Python works):
    python dataset_inspector.py --dataset your/dataset --split train
"""

import argparse
import sys
import json
import urllib.request
import urllib.parse
from typing import List, Dict, Any


def parse_args():
    parser = argparse.ArgumentParser(description="Inspect dataset format for TRL training")
    parser.add_argument("--dataset", type=str, required=True, help="Dataset name")
    parser.add_argument("--split", type=str, default="train", help="Dataset split (default: train)")
    parser.add_argument("--config", type=str, default="default", help="Dataset config name (default: default)")
    parser.add_argument("--preview", type=int, default=150, help="Max chars per field preview")
    parser.add_argument("--samples", type=int, default=5, help="Number of samples to fetch (default: 5)")
    parser.add_argument("--json-output", action="store_true", help="Output as JSON")
    return parser.parse_args()


def api_request(url: str) -> Dict:
    """Make an API request to the Datasets Server"""
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            return json.loads(response.read().decode())
    except urllib.error.HTTPError as e:
        if e.code == 404:
            return None
        raise Exception(f"API request failed: {e.code} {e.reason}")
    except Exception as e:
        raise Exception(f"API request failed: {str(e)}")


def get_splits(dataset: str) -> Dict:
    """Get the available splits for a dataset"""
    url = f"https://datasets-server.huggingface.co/splits?dataset={urllib.parse.quote(dataset)}"
    return api_request(url)


def get_rows(dataset: str, config: str, split: str, offset: int = 0, length: int = 5) -> Dict:
    """Get rows from a dataset"""
    url = f"https://datasets-server.huggingface.co/rows?dataset={urllib.parse.quote(dataset)}&config={config}&split={split}&offset={offset}&length={length}"
    return api_request(url)


def find_columns(columns: List[str], patterns: List[str]) -> List[str]:
    """Find columns matching any of the given patterns"""
    return [c for c in columns if any(p in c.lower() for p in patterns)]


def check_sft_compatibility(columns: List[str]) -> Dict[str, Any]:
    """Check SFT compatibility"""
    has_messages = "messages" in columns
    has_text = "text" in columns
    has_prompt_completion = "prompt" in columns and "completion" in columns

    ready = has_messages or has_text or has_prompt_completion

    possible_prompt = find_columns(columns, ["prompt", "instruction", "question", "input"])
    possible_response = find_columns(columns, ["response", "completion", "output", "answer"])

    return {
        "ready": ready,
        "reason": "messages" if has_messages else "text" if has_text else "prompt+completion" if has_prompt_completion else None,
        "possible_prompt": possible_prompt[0] if possible_prompt else None,
        "possible_response": possible_response[0] if possible_response else None,
        "has_context": "context" in columns,
    }


def check_dpo_compatibility(columns: List[str]) -> Dict[str, Any]:
    """Check DPO compatibility"""
    has_standard = "prompt" in columns and "chosen" in columns and "rejected" in columns

    possible_prompt = find_columns(columns, ["prompt", "instruction", "question", "input"])
    possible_chosen = find_columns(columns, ["chosen", "preferred", "winner"])
    possible_rejected = find_columns(columns, ["rejected", "dispreferred", "loser"])

    can_map = bool(possible_prompt and possible_chosen and possible_rejected)

    return {
        "ready": has_standard,
        "can_map": can_map,
        "prompt_col": possible_prompt[0] if possible_prompt else None,
        "chosen_col": possible_chosen[0] if possible_chosen else None,
        "rejected_col": possible_rejected[0] if possible_rejected else None,
    }


def check_grpo_compatibility(columns: List[str]) -> Dict[str, Any]:
    """Check GRPO compatibility"""
    has_prompt = "prompt" in columns
    has_no_responses = "chosen" not in columns and "rejected" not in columns

    possible_prompt = find_columns(columns, ["prompt", "instruction", "question", "input"])

    return {
        "ready": has_prompt and has_no_responses,
        "can_map": bool(possible_prompt) and has_no_responses,
        "prompt_col": possible_prompt[0] if possible_prompt else None,
    }


def check_kto_compatibility(columns: List[str]) -> Dict[str, Any]:
    """Check KTO compatibility"""
    return {"ready": "prompt" in columns and "completion" in columns and "label" in columns}


def generate_mapping_code(method: str, info: Dict[str, Any]) -> str:
    """Generate mapping code for a training method"""
    if method == "SFT":
        if info["ready"]:
            return None

        prompt_col = info.get("possible_prompt")
        response_col = info.get("possible_response")
        has_context = info.get("has_context", False)

        if not prompt_col:
            return None

        if has_context and response_col:
            return f"""def format_for_sft(example):
    text = f"Instruction: {{example['{prompt_col}']}}\\n\\n"
    if example.get('context'):
        text += f"Context: {{example['context']}}\\n\\n"
    text += f"Response: {{example['{response_col}']}}"
    return {{'text': text}}

dataset = dataset.map(format_for_sft, remove_columns=dataset.column_names)"""
        elif response_col:
            # Note: the closing quote of the generated f-string was missing here.
            return f"""def format_for_sft(example):
    return {{'text': f"{{example['{prompt_col}']}}\\n\\n{{example['{response_col}']}}"}}

dataset = dataset.map(format_for_sft, remove_columns=dataset.column_names)"""
        else:
            return f"""def format_for_sft(example):
    return {{'text': example['{prompt_col}']}}

dataset = dataset.map(format_for_sft, remove_columns=dataset.column_names)"""

    elif method == "DPO":
        if info["ready"] or not info["can_map"]:
            return None

        return f"""def format_for_dpo(example):
    return {{
        'prompt': example['{info['prompt_col']}'],
        'chosen': example['{info['chosen_col']}'],
        'rejected': example['{info['rejected_col']}'],
    }}

dataset = dataset.map(format_for_dpo, remove_columns=dataset.column_names)"""

    elif method == "GRPO":
        if info["ready"] or not info["can_map"]:
            return None

        return f"""def format_for_grpo(example):
    return {{'prompt': example['{info['prompt_col']}']}}

dataset = dataset.map(format_for_grpo, remove_columns=dataset.column_names)"""

    return None


def format_value_preview(value: Any, max_chars: int) -> str:
    """Format a value for preview"""
    if value is None:
        return "None"
    elif isinstance(value, str):
        return value[:max_chars] + ("..." if len(value) > max_chars else "")
    elif isinstance(value, list):
        if len(value) > 0 and isinstance(value[0], dict):
            return f"[{len(value)} items] Keys: {list(value[0].keys())}"
        preview = str(value)
        return preview[:max_chars] + ("..." if len(preview) > max_chars else "")
    else:
        preview = str(value)
        return preview[:max_chars] + ("..." if len(preview) > max_chars else "")


def main():
    args = parse_args()

    print("Fetching dataset info via the Datasets Server API...")

    try:
        # Get splits info
        splits_data = get_splits(args.dataset)
        if not splits_data or "splits" not in splits_data:
            print(f"ERROR: Could not fetch splits for dataset '{args.dataset}'")
            print("       Dataset may not exist or is not accessible via the Datasets Server API")
            sys.exit(1)

        # Find the right config
        available_configs = set()
        split_found = False
        config_to_use = args.config

        for split_info in splits_data["splits"]:
            available_configs.add(split_info["config"])
            if split_info["config"] == args.config and split_info["split"] == args.split:
                split_found = True

        # If the default config is not found, try the first available one
        if not split_found and available_configs:
            config_to_use = list(available_configs)[0]
            print(f"Config '{args.config}' not found, trying '{config_to_use}'...")

        # Get rows
        rows_data = get_rows(args.dataset, config_to_use, args.split, offset=0, length=args.samples)

        if not rows_data or "rows" not in rows_data:
            print(f"ERROR: Could not fetch rows for dataset '{args.dataset}'")
            print(f"       Split '{args.split}' may not exist")
            print(f"       Available configs: {', '.join(sorted(available_configs))}")
            sys.exit(1)

        rows = rows_data["rows"]
        if not rows:
            print(f"ERROR: No rows found in split '{args.split}'")
            sys.exit(1)

        # Extract column info from the first row
        first_row = rows[0]["row"]
        columns = list(first_row.keys())
        features = rows_data.get("features", [])

        # Get the total count if available
        total_examples = "Unknown"
        for split_info in splits_data["splits"]:
            if split_info["config"] == config_to_use and split_info["split"] == args.split:
                total_examples = f"{split_info.get('num_examples', 'Unknown'):,}" if isinstance(split_info.get('num_examples'), int) else "Unknown"
                break

    except Exception as e:
        print(f"ERROR: {str(e)}")
        sys.exit(1)

    # Run compatibility checks
    sft_info = check_sft_compatibility(columns)
    dpo_info = check_dpo_compatibility(columns)
    grpo_info = check_grpo_compatibility(columns)
    kto_info = check_kto_compatibility(columns)

    # Determine the recommended methods
    recommended = []
    if sft_info["ready"]:
        recommended.append("SFT")
    elif sft_info["possible_prompt"]:
        recommended.append("SFT (needs mapping)")

    if dpo_info["ready"]:
        recommended.append("DPO")
    elif dpo_info["can_map"]:
        recommended.append("DPO (needs mapping)")

    if grpo_info["ready"]:
        recommended.append("GRPO")
    elif grpo_info["can_map"]:
        recommended.append("GRPO (needs mapping)")

    if kto_info["ready"]:
        recommended.append("KTO")

    # JSON output mode
    if args.json_output:
        result = {
            "dataset": args.dataset,
            "config": config_to_use,
            "split": args.split,
            "total_examples": total_examples,
            "columns": columns,
            "features": [{"name": f["name"], "type": f["type"]} for f in features] if features else [],
            "compatibility": {
                "SFT": sft_info,
                "DPO": dpo_info,
                "GRPO": grpo_info,
                "KTO": kto_info,
            },
            "recommended_methods": recommended,
        }
        print(json.dumps(result, indent=2))
        sys.exit(0)

    # Human-readable output optimized for LLM parsing
    print("=" * 80)
    print("DATASET INSPECTION RESULTS")
    print("=" * 80)

    print(f"\nDataset: {args.dataset}")
    print(f"Config: {config_to_use}")
    print(f"Split: {args.split}")
    print(f"Total examples: {total_examples}")
    print(f"Samples fetched: {len(rows)}")

    print(f"\n{'COLUMNS':-<80}")
    if features:
        for feature in features:
            print(f"  {feature['name']}: {feature['type']}")
    else:
        for col in columns:
            print(f"  {col}: (type info not available)")

    print(f"\n{'EXAMPLE DATA':-<80}")
    example = first_row
    for col in columns:
        value = example.get(col)
        display = format_value_preview(value, args.preview)
        print(f"\n{col}:")
        print(f"  {display}")

    print(f"\n{'TRAINING METHOD COMPATIBILITY':-<80}")

    # SFT
    print(f"\n[SFT] {'✓ READY' if sft_info['ready'] else '✗ NEEDS MAPPING'}")
    if sft_info["ready"]:
        print(f"  Reason: Dataset has '{sft_info['reason']}' field")
        print("  Action: Use directly with SFTTrainer")
    elif sft_info["possible_prompt"]:
        print(f"  Detected: prompt='{sft_info['possible_prompt']}' response='{sft_info['possible_response']}'")
        print("  Action: Apply mapping code (see below)")
    else:
        print("  Status: Cannot determine mapping - manual inspection needed")

    # DPO
    print(f"\n[DPO] {'✓ READY' if dpo_info['ready'] else '✗ NEEDS MAPPING' if dpo_info['can_map'] else '✗ INCOMPATIBLE'}")
    if dpo_info["ready"]:
        print("  Reason: Dataset has 'prompt', 'chosen', 'rejected' fields")
        print("  Action: Use directly with DPOTrainer")
    elif dpo_info["can_map"]:
        print(f"  Detected: prompt='{dpo_info['prompt_col']}' chosen='{dpo_info['chosen_col']}' rejected='{dpo_info['rejected_col']}'")
        print("  Action: Apply mapping code (see below)")
    else:
        print("  Status: Missing required fields (prompt + chosen + rejected)")

    # GRPO
    print(f"\n[GRPO] {'✓ READY' if grpo_info['ready'] else '✗ NEEDS MAPPING' if grpo_info['can_map'] else '✗ INCOMPATIBLE'}")
    if grpo_info["ready"]:
        print("  Reason: Dataset has 'prompt' field")
        print("  Action: Use directly with GRPOTrainer")
    elif grpo_info["can_map"]:
        print(f"  Detected: prompt='{grpo_info['prompt_col']}'")
        print("  Action: Apply mapping code (see below)")
    else:
        print("  Status: Missing prompt field")

    # KTO
    print(f"\n[KTO] {'✓ READY' if kto_info['ready'] else '✗ INCOMPATIBLE'}")
    if kto_info["ready"]:
        print("  Reason: Dataset has 'prompt', 'completion', 'label' fields")
        print("  Action: Use directly with KTOTrainer")
    else:
        print("  Status: Missing required fields (prompt + completion + label)")

    # Mapping code
    print(f"\n{'MAPPING CODE (if needed)':-<80}")

    mapping_needed = False

    sft_mapping = generate_mapping_code("SFT", sft_info)
    if sft_mapping:
        print("\n# For SFT Training:")
        print(sft_mapping)
        mapping_needed = True

    dpo_mapping = generate_mapping_code("DPO", dpo_info)
    if dpo_mapping:
        print("\n# For DPO Training:")
        print(dpo_mapping)
        mapping_needed = True

    grpo_mapping = generate_mapping_code("GRPO", grpo_info)
    if grpo_mapping:
        print("\n# For GRPO Training:")
        print(grpo_mapping)
        mapping_needed = True

    if not mapping_needed:
        print("\nNo mapping needed - the dataset is ready for training!")

    print(f"\n{'SUMMARY':-<80}")
    print(f"Recommended training methods: {', '.join(recommended) if recommended else 'None (dataset needs formatting)'}")
    print("\nNote: Used the Datasets Server API (instant, no download required)")

    print("\n" + "=" * 80)
    sys.exit(0)


if __name__ == "__main__":
    try:
        main()
    except KeyboardInterrupt:
        sys.exit(0)
    except Exception as e:
        print(f"ERROR: {e}", file=sys.stderr)
        sys.exit(1)
demo_train.py DELETED
@@ -1,84 +0,0 @@
# /// script
# dependencies = [
#     "trl>=0.12.0",
#     "peft>=0.7.0",
#     "transformers>=4.36.0",
#     "accelerate>=0.24.0",
#     "trackio",
# ]
# ///

import trackio
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTTrainer, SFTConfig

# Initialize Trackio for real-time monitoring
trackio.init(
    project="qwen-demo-sft",
    space_id="evalstate/demo-trackio-dashboard",
    config={
        "model": "Qwen/Qwen2.5-0.5B",
        "dataset": "trl-lib/Capybara",
        "examples": 50,
        "max_steps": 20,
        "note": "Quick demo training"
    }
)

# Load dataset (only 50 examples for a quick demo)
dataset = load_dataset("trl-lib/Capybara", split="train[:50]")
print(f"✅ Dataset loaded: {len(dataset)} examples")

# Training configuration
config = SFTConfig(
    # Hub settings - CRITICAL for saving results
    output_dir="qwen-demo-sft",
    push_to_hub=True,
    hub_model_id="evalstate/qwen-demo-sft",

    # Quick training settings
    max_steps=20,  # Very short for the demo
    per_device_train_batch_size=2,
    gradient_accumulation_steps=2,
    learning_rate=2e-5,

    # Logging
    logging_steps=5,
    save_strategy="steps",
    save_steps=10,

    # Monitoring
    # NOTE: combined with the manual trackio.init(...) above, this is the
    # double-initialization discussed in SCRIPT_COMPARISON.md (two spaces).
    report_to="trackio",
)

# LoRA configuration (memory efficient)
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],
)

# Initialize and train
trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",
    train_dataset=dataset,
    args=config,
    peft_config=peft_config,
)

print("🚀 Starting demo training...")
trainer.train()

print("💾 Pushing to Hub...")
trainer.push_to_hub()

# Finish Trackio tracking
trackio.finish()

print("✅ Demo complete!")
print("📦 Model: https://huggingface.co/evalstate/qwen-demo-sft")
print("📊 Metrics: https://huggingface.co/spaces/evalstate/demo-trackio-dashboard")
skill-creator/LICENSE.txt DELETED
@@ -1,202 +0,0 @@

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
160
- result of this License or out of the use or inability to use the
161
- Work (including but not limited to damages for loss of goodwill,
162
- work stoppage, computer failure or malfunction, or any and all
163
- other commercial damages or losses), even if such Contributor
164
- has been advised of the possibility of such damages.
165
-
166
- 9. Accepting Warranty or Additional Liability. While redistributing
167
- the Work or Derivative Works thereof, You may choose to offer,
168
- and charge a fee for, acceptance of support, warranty, indemnity,
169
- or other liability obligations and/or rights consistent with this
170
- License. However, in accepting such obligations, You may act only
171
- on Your own behalf and on Your sole responsibility, not on behalf
172
- of any other Contributor, and only if You agree to indemnify,
173
- defend, and hold each Contributor harmless for any liability
174
- incurred by, or claims asserted against, such Contributor by reason
175
- of your accepting any such warranty or additional liability.
176
-
177
- END OF TERMS AND CONDITIONS
178
-
179
- APPENDIX: How to apply the Apache License to your work.
180
-
181
- To apply the Apache License to your work, attach the following
182
- boilerplate notice, with the fields enclosed by brackets "[]"
183
- replaced with your own identifying information. (Don't include
184
- the brackets!) The text should be enclosed in the appropriate
185
- comment syntax for the file format. We also recommend that a
186
- file or class name and description of purpose be included on the
187
- same "printed page" as the copyright notice for easier
188
- identification within third-party archives.
189
-
190
- Copyright [yyyy] [name of copyright owner]
191
-
192
- Licensed under the Apache License, Version 2.0 (the "License");
193
- you may not use this file except in compliance with the License.
194
- You may obtain a copy of the License at
195
-
196
- http://www.apache.org/licenses/LICENSE-2.0
197
-
198
- Unless required by applicable law or agreed to in writing, software
199
- distributed under the License is distributed on an "AS IS" BASIS,
200
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
201
- See the License for the specific language governing permissions and
202
- limitations under the License.
skill-creator/SKILL.md DELETED
@@ -1,209 +0,0 @@
---
name: skill-creator
description: Guide for creating effective skills. This skill should be used when users want to create a new skill (or update an existing skill) that extends Claude's capabilities with specialized knowledge, workflows, or tool integrations.
license: Complete terms in LICENSE.txt
---

# Skill Creator

This skill provides guidance for creating effective skills.

## About Skills

Skills are modular, self-contained packages that extend Claude's capabilities by providing
specialized knowledge, workflows, and tools. Think of them as "onboarding guides" for specific
domains or tasks—they transform Claude from a general-purpose agent into a specialized agent
equipped with procedural knowledge that no model can fully possess.

### What Skills Provide

1. Specialized workflows - Multi-step procedures for specific domains
2. Tool integrations - Instructions for working with specific file formats or APIs
3. Domain expertise - Company-specific knowledge, schemas, business logic
4. Bundled resources - Scripts, references, and assets for complex and repetitive tasks

### Anatomy of a Skill

Every skill consists of a required SKILL.md file and optional bundled resources:

```
skill-name/
├── SKILL.md (required)
│   ├── YAML frontmatter metadata (required)
│   │   ├── name: (required)
│   │   └── description: (required)
│   └── Markdown instructions (required)
└── Bundled Resources (optional)
    ├── scripts/    - Executable code (Python/Bash/etc.)
    ├── references/ - Documentation intended to be loaded into context as needed
    └── assets/     - Files used in output (templates, icons, fonts, etc.)
```

#### SKILL.md (required)

**Metadata Quality:** The `name` and `description` in YAML frontmatter determine when Claude will use the skill. Be specific about what the skill does and when to use it. Use the third-person (e.g. "This skill should be used when..." instead of "Use this skill when...").

#### Bundled Resources (optional)

##### Scripts (`scripts/`)

Executable code (Python/Bash/etc.) for tasks that require deterministic reliability or are repeatedly rewritten.

- **When to include**: When the same code is being rewritten repeatedly or deterministic reliability is needed
- **Example**: `scripts/rotate_pdf.py` for PDF rotation tasks
- **Benefits**: Token efficient, deterministic, may be executed without loading into context
- **Note**: Scripts may still need to be read by Claude for patching or environment-specific adjustments

##### References (`references/`)

Documentation and reference material intended to be loaded as needed into context to inform Claude's process and thinking.

- **When to include**: For documentation that Claude should reference while working
- **Examples**: `references/finance.md` for financial schemas, `references/mnda.md` for company NDA template, `references/policies.md` for company policies, `references/api_docs.md` for API specifications
- **Use cases**: Database schemas, API documentation, domain knowledge, company policies, detailed workflow guides
- **Benefits**: Keeps SKILL.md lean, loaded only when Claude determines it's needed
- **Best practice**: If files are large (>10k words), include grep search patterns in SKILL.md
- **Avoid duplication**: Information should live in either SKILL.md or references files, not both. Prefer references files for detailed information unless it's truly core to the skill—this keeps SKILL.md lean while making information discoverable without hogging the context window. Keep only essential procedural instructions and workflow guidance in SKILL.md; move detailed reference material, schemas, and examples to references files.

##### Assets (`assets/`)

Files not intended to be loaded into context, but rather used within the output Claude produces.

- **When to include**: When the skill needs files that will be used in the final output
- **Examples**: `assets/logo.png` for brand assets, `assets/slides.pptx` for PowerPoint templates, `assets/frontend-template/` for HTML/React boilerplate, `assets/font.ttf` for typography
- **Use cases**: Templates, images, icons, boilerplate code, fonts, sample documents that get copied or modified
- **Benefits**: Separates output resources from documentation, enables Claude to use files without loading them into context

### Progressive Disclosure Design Principle

Skills use a three-level loading system to manage context efficiently:

1. **Metadata (name + description)** - Always in context (~100 words)
2. **SKILL.md body** - When skill triggers (<5k words)
3. **Bundled resources** - As needed by Claude (Unlimited*)

*Unlimited because scripts can be executed without reading into context window.

## Skill Creation Process

To create a skill, follow the "Skill Creation Process" in order, skipping steps only if there is a clear reason why they are not applicable.

### Step 1: Understanding the Skill with Concrete Examples

Skip this step only when the skill's usage patterns are already clearly understood. It remains valuable even when working with an existing skill.

To create an effective skill, clearly understand concrete examples of how the skill will be used. This understanding can come from either direct user examples or generated examples that are validated with user feedback.

For example, when building an image-editor skill, relevant questions include:

- "What functionality should the image-editor skill support? Editing, rotating, anything else?"
- "Can you give some examples of how this skill would be used?"
- "I can imagine users asking for things like 'Remove the red-eye from this image' or 'Rotate this image'. Are there other ways you imagine this skill being used?"
- "What would a user say that should trigger this skill?"

To avoid overwhelming users, avoid asking too many questions in a single message. Start with the most important questions and follow up as needed for better effectiveness.

Conclude this step when there is a clear sense of the functionality the skill should support.

### Step 2: Planning the Reusable Skill Contents

To turn concrete examples into an effective skill, analyze each example by:

1. Considering how to execute on the example from scratch
2. Identifying what scripts, references, and assets would be helpful when executing these workflows repeatedly

Example: When building a `pdf-editor` skill to handle queries like "Help me rotate this PDF," the analysis shows:

1. Rotating a PDF requires re-writing the same code each time
2. A `scripts/rotate_pdf.py` script would be helpful to store in the skill

Example: When designing a `frontend-webapp-builder` skill for queries like "Build me a todo app" or "Build me a dashboard to track my steps," the analysis shows:

1. Writing a frontend webapp requires the same boilerplate HTML/React each time
2. An `assets/hello-world/` template containing the boilerplate HTML/React project files would be helpful to store in the skill

Example: When building a `big-query` skill to handle queries like "How many users have logged in today?" the analysis shows:

1. Querying BigQuery requires re-discovering the table schemas and relationships each time
2. A `references/schema.md` file documenting the table schemas would be helpful to store in the skill

To establish the skill's contents, analyze each concrete example to create a list of the reusable resources to include: scripts, references, and assets.

### Step 3: Initializing the Skill

At this point, it is time to actually create the skill.

Skip this step only if the skill being developed already exists, and iteration or packaging is needed. In this case, continue to the next step.

When creating a new skill from scratch, always run the `init_skill.py` script. The script conveniently generates a new template skill directory that automatically includes everything a skill requires, making the skill creation process much more efficient and reliable.

Usage:

```bash
scripts/init_skill.py <skill-name> --path <output-directory>
```

The script:

- Creates the skill directory at the specified path
- Generates a SKILL.md template with proper frontmatter and TODO placeholders
- Creates example resource directories: `scripts/`, `references/`, and `assets/`
- Adds example files in each directory that can be customized or deleted

After initialization, customize or remove the generated SKILL.md and example files as needed.

### Step 4: Edit the Skill

When editing the (newly-generated or existing) skill, remember that the skill is being created for another instance of Claude to use. Focus on including information that would be beneficial and non-obvious to Claude. Consider what procedural knowledge, domain-specific details, or reusable assets would help another Claude instance execute these tasks more effectively.

#### Start with Reusable Skill Contents

To begin implementation, start with the reusable resources identified above: `scripts/`, `references/`, and `assets/` files. Note that this step may require user input. For example, when implementing a `brand-guidelines` skill, the user may need to provide brand assets or templates to store in `assets/`, or documentation to store in `references/`.

Also, delete any example files and directories not needed for the skill. The initialization script creates example files in `scripts/`, `references/`, and `assets/` to demonstrate structure, but most skills won't need all of them.

#### Update SKILL.md

**Writing Style:** Write the entire skill using **imperative/infinitive form** (verb-first instructions), not second person. Use objective, instructional language (e.g., "To accomplish X, do Y" rather than "You should do X" or "If you need to do X"). This maintains consistency and clarity for AI consumption.

To complete SKILL.md, answer the following questions:

1. What is the purpose of the skill, in a few sentences?
2. When should the skill be used?
3. In practice, how should Claude use the skill? All reusable skill contents developed above should be referenced so that Claude knows how to use them.

### Step 5: Packaging a Skill

Once the skill is ready, it should be packaged into a distributable zip file that gets shared with the user. The packaging process automatically validates the skill first to ensure it meets all requirements:

```bash
scripts/package_skill.py <path/to/skill-folder>
```

Optional output directory specification:

```bash
scripts/package_skill.py <path/to/skill-folder> ./dist
```

The packaging script will:

1. **Validate** the skill automatically, checking:
   - YAML frontmatter format and required fields
   - Skill naming conventions and directory structure
   - Description completeness and quality
   - File organization and resource references

2. **Package** the skill if validation passes, creating a zip file named after the skill (e.g., `my-skill.zip`) that includes all files and maintains the proper directory structure for distribution.

If validation fails, the script will report the errors and exit without creating a package. Fix any validation errors and run the packaging command again.

### Step 6: Iterate

After testing the skill, users may request improvements. Often this happens right after using the skill, with fresh context of how the skill performed.

**Iteration workflow:**
1. Use the skill on real tasks
2. Notice struggles or inefficiencies
3. Identify how SKILL.md or bundled resources should be updated
4. Implement changes and test again
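
The two scripts referenced in Steps 3 and 5 compose into a single create-edit-package loop. A minimal sketch of driving that loop from Python, assuming it is run from the skill-creator directory (the skill name and paths are illustrative, not from the repository):

```python
# Sketch only: drive init_skill.py and package_skill.py from Python.
# "image-editor" and the paths below are illustrative.
import subprocess

# Step 3: generate the template skill directory
subprocess.run(
    ["python", "scripts/init_skill.py", "image-editor", "--path", "skills/public"],
    check=True,
)

# Step 4 happens here: edit SKILL.md and prune the example resources.

# Step 5: validate and package into image-editor.zip
subprocess.run(
    ["python", "scripts/package_skill.py", "skills/public/image-editor", "./dist"],
    check=True,
)
```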
skill-creator/scripts/__pycache__/quick_validate.cpython-313.pyc DELETED
Binary file (2.7 kB)
 
skill-creator/scripts/init_skill.py DELETED
@@ -1,303 +0,0 @@
#!/usr/bin/env python3
"""
Skill Initializer - Creates a new skill from template

Usage:
    init_skill.py <skill-name> --path <path>

Examples:
    init_skill.py my-new-skill --path skills/public
    init_skill.py my-api-helper --path skills/private
    init_skill.py custom-skill --path /custom/location
"""

import sys
from pathlib import Path


SKILL_TEMPLATE = """---
name: {skill_name}
description: [TODO: Complete and informative explanation of what the skill does and when to use it. Include WHEN to use this skill - specific scenarios, file types, or tasks that trigger it.]
---

# {skill_title}

## Overview

[TODO: 1-2 sentences explaining what this skill enables]

## Structuring This Skill

[TODO: Choose the structure that best fits this skill's purpose. Common patterns:

**1. Workflow-Based** (best for sequential processes)
- Works well when there are clear step-by-step procedures
- Example: DOCX skill with "Workflow Decision Tree" → "Reading" → "Creating" → "Editing"
- Structure: ## Overview → ## Workflow Decision Tree → ## Step 1 → ## Step 2...

**2. Task-Based** (best for tool collections)
- Works well when the skill offers different operations/capabilities
- Example: PDF skill with "Quick Start" → "Merge PDFs" → "Split PDFs" → "Extract Text"
- Structure: ## Overview → ## Quick Start → ## Task Category 1 → ## Task Category 2...

**3. Reference/Guidelines** (best for standards or specifications)
- Works well for brand guidelines, coding standards, or requirements
- Example: Brand styling with "Brand Guidelines" → "Colors" → "Typography" → "Features"
- Structure: ## Overview → ## Guidelines → ## Specifications → ## Usage...

**4. Capabilities-Based** (best for integrated systems)
- Works well when the skill provides multiple interrelated features
- Example: Product Management with "Core Capabilities" → numbered capability list
- Structure: ## Overview → ## Core Capabilities → ### 1. Feature → ### 2. Feature...

Patterns can be mixed and matched as needed. Most skills combine patterns (e.g., start with task-based, add workflow for complex operations).

Delete this entire "Structuring This Skill" section when done - it's just guidance.]

## [TODO: Replace with the first main section based on chosen structure]

[TODO: Add content here. See examples in existing skills:
- Code samples for technical skills
- Decision trees for complex workflows
- Concrete examples with realistic user requests
- References to scripts/templates/references as needed]

## Resources

This skill includes example resource directories that demonstrate how to organize different types of bundled resources:

### scripts/
Executable code (Python/Bash/etc.) that can be run directly to perform specific operations.

**Examples from other skills:**
- PDF skill: `fill_fillable_fields.py`, `extract_form_field_info.py` - utilities for PDF manipulation
- DOCX skill: `document.py`, `utilities.py` - Python modules for document processing

**Appropriate for:** Python scripts, shell scripts, or any executable code that performs automation, data processing, or specific operations.

**Note:** Scripts may be executed without loading into context, but can still be read by Claude for patching or environment adjustments.

### references/
Documentation and reference material intended to be loaded into context to inform Claude's process and thinking.

**Examples from other skills:**
- Product management: `communication.md`, `context_building.md` - detailed workflow guides
- BigQuery: API reference documentation and query examples
- Finance: Schema documentation, company policies

**Appropriate for:** In-depth documentation, API references, database schemas, comprehensive guides, or any detailed information that Claude should reference while working.

### assets/
Files not intended to be loaded into context, but rather used within the output Claude produces.

**Examples from other skills:**
- Brand styling: PowerPoint template files (.pptx), logo files
- Frontend builder: HTML/React boilerplate project directories
- Typography: Font files (.ttf, .woff2)

**Appropriate for:** Templates, boilerplate code, document templates, images, icons, fonts, or any files meant to be copied or used in the final output.

---

**Any unneeded directories can be deleted.** Not every skill requires all three types of resources.
"""

EXAMPLE_SCRIPT = '''#!/usr/bin/env python3
"""
Example helper script for {skill_name}

This is a placeholder script that can be executed directly.
Replace with actual implementation or delete if not needed.

Example real scripts from other skills:
- pdf/scripts/fill_fillable_fields.py - Fills PDF form fields
- pdf/scripts/convert_pdf_to_images.py - Converts PDF pages to images
"""

def main():
    print("This is an example script for {skill_name}")
    # TODO: Add actual script logic here
    # This could be data processing, file conversion, API calls, etc.

if __name__ == "__main__":
    main()
'''

EXAMPLE_REFERENCE = """# Reference Documentation for {skill_title}

This is a placeholder for detailed reference documentation.
Replace with actual reference content or delete if not needed.

Example real reference docs from other skills:
- product-management/references/communication.md - Comprehensive guide for status updates
- product-management/references/context_building.md - Deep-dive on gathering context
- bigquery/references/ - API references and query examples

## When Reference Docs Are Useful

Reference docs are ideal for:
- Comprehensive API documentation
- Detailed workflow guides
- Complex multi-step processes
- Information too lengthy for main SKILL.md
- Content that's only needed for specific use cases

## Structure Suggestions

### API Reference Example
- Overview
- Authentication
- Endpoints with examples
- Error codes
- Rate limits

### Workflow Guide Example
- Prerequisites
- Step-by-step instructions
- Common patterns
- Troubleshooting
- Best practices
"""

EXAMPLE_ASSET = """# Example Asset File

This placeholder represents where asset files would be stored.
Replace with actual asset files (templates, images, fonts, etc.) or delete if not needed.

Asset files are NOT intended to be loaded into context, but rather used within
the output Claude produces.

Example asset files from other skills:
- Brand guidelines: logo.png, slides_template.pptx
- Frontend builder: hello-world/ directory with HTML/React boilerplate
- Typography: custom-font.ttf, font-family.woff2
- Data: sample_data.csv, test_dataset.json

## Common Asset Types

- Templates: .pptx, .docx, boilerplate directories
- Images: .png, .jpg, .svg, .gif
- Fonts: .ttf, .otf, .woff, .woff2
- Boilerplate code: Project directories, starter files
- Icons: .ico, .svg
- Data files: .csv, .json, .xml, .yaml

Note: This is a text placeholder. Actual assets can be any file type.
"""


def title_case_skill_name(skill_name):
    """Convert hyphenated skill name to Title Case for display."""
    return ' '.join(word.capitalize() for word in skill_name.split('-'))


def init_skill(skill_name, path):
    """
    Initialize a new skill directory with template SKILL.md.

    Args:
        skill_name: Name of the skill
        path: Path where the skill directory should be created

    Returns:
        Path to created skill directory, or None if error
    """
    # Determine skill directory path
    skill_dir = Path(path).resolve() / skill_name

    # Check if directory already exists
    if skill_dir.exists():
        print(f"❌ Error: Skill directory already exists: {skill_dir}")
        return None

    # Create skill directory
    try:
        skill_dir.mkdir(parents=True, exist_ok=False)
        print(f"✅ Created skill directory: {skill_dir}")
    except Exception as e:
        print(f"❌ Error creating directory: {e}")
        return None

    # Create SKILL.md from template
    skill_title = title_case_skill_name(skill_name)
    skill_content = SKILL_TEMPLATE.format(
        skill_name=skill_name,
        skill_title=skill_title
    )

    skill_md_path = skill_dir / 'SKILL.md'
    try:
        skill_md_path.write_text(skill_content)
        print("✅ Created SKILL.md")
    except Exception as e:
        print(f"❌ Error creating SKILL.md: {e}")
        return None

    # Create resource directories with example files
    try:
        # Create scripts/ directory with example script
        scripts_dir = skill_dir / 'scripts'
        scripts_dir.mkdir(exist_ok=True)
        example_script = scripts_dir / 'example.py'
        example_script.write_text(EXAMPLE_SCRIPT.format(skill_name=skill_name))
        example_script.chmod(0o755)
        print("✅ Created scripts/example.py")

        # Create references/ directory with example reference doc
        references_dir = skill_dir / 'references'
        references_dir.mkdir(exist_ok=True)
        example_reference = references_dir / 'api_reference.md'
        example_reference.write_text(EXAMPLE_REFERENCE.format(skill_title=skill_title))
        print("✅ Created references/api_reference.md")

        # Create assets/ directory with example asset placeholder
        assets_dir = skill_dir / 'assets'
        assets_dir.mkdir(exist_ok=True)
        example_asset = assets_dir / 'example_asset.txt'
        example_asset.write_text(EXAMPLE_ASSET)
        print("✅ Created assets/example_asset.txt")
    except Exception as e:
        print(f"❌ Error creating resource directories: {e}")
        return None

    # Print next steps
    print(f"\n✅ Skill '{skill_name}' initialized successfully at {skill_dir}")
    print("\nNext steps:")
    print("1. Edit SKILL.md to complete the TODO items and update the description")
    print("2. Customize or delete the example files in scripts/, references/, and assets/")
    print("3. Run the validator when ready to check the skill structure")

    return skill_dir


def main():
    if len(sys.argv) < 4 or sys.argv[2] != '--path':
        print("Usage: init_skill.py <skill-name> --path <path>")
        print("\nSkill name requirements:")
        print("  - Hyphen-case identifier (e.g., 'data-analyzer')")
        print("  - Lowercase letters, digits, and hyphens only")
        print("  - Max 40 characters")
        print("  - Must match directory name exactly")
        print("\nExamples:")
        print("  init_skill.py my-new-skill --path skills/public")
        print("  init_skill.py my-api-helper --path skills/private")
        print("  init_skill.py custom-skill --path /custom/location")
        sys.exit(1)

    skill_name = sys.argv[1]
    path = sys.argv[3]

    print(f"🚀 Initializing skill: {skill_name}")
    print(f"   Location: {path}")
    print()

    result = init_skill(skill_name, path)

    if result:
        sys.exit(0)
    else:
        sys.exit(1)


if __name__ == "__main__":
    main()
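
Since `init_skill()` returns the created `Path` on success and `None` on failure, it can also be driven programmatically rather than through the CLI; a minimal sketch (the skill name and path are illustrative):

```python
# Sketch: calling init_skill() directly instead of via the CLI wrapper.
from init_skill import init_skill

skill_dir = init_skill("data-analyzer", "skills/public")
if skill_dir is not None:
    print(f"Template ready at {skill_dir}")
```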
skill-creator/scripts/package_skill.py DELETED
@@ -1,110 +0,0 @@
#!/usr/bin/env python3
"""
Skill Packager - Creates a distributable zip file of a skill folder

Usage:
    python utils/package_skill.py <path/to/skill-folder> [output-directory]

Example:
    python utils/package_skill.py skills/public/my-skill
    python utils/package_skill.py skills/public/my-skill ./dist
"""

import sys
import zipfile
from pathlib import Path
from quick_validate import validate_skill


def package_skill(skill_path, output_dir=None):
    """
    Package a skill folder into a zip file.

    Args:
        skill_path: Path to the skill folder
        output_dir: Optional output directory for the zip file (defaults to current directory)

    Returns:
        Path to the created zip file, or None if error
    """
    skill_path = Path(skill_path).resolve()

    # Validate skill folder exists
    if not skill_path.exists():
        print(f"❌ Error: Skill folder not found: {skill_path}")
        return None

    if not skill_path.is_dir():
        print(f"❌ Error: Path is not a directory: {skill_path}")
        return None

    # Validate SKILL.md exists
    skill_md = skill_path / "SKILL.md"
    if not skill_md.exists():
        print(f"❌ Error: SKILL.md not found in {skill_path}")
        return None

    # Run validation before packaging
    print("🔍 Validating skill...")
    valid, message = validate_skill(skill_path)
    if not valid:
        print(f"❌ Validation failed: {message}")
        print("   Please fix the validation errors before packaging.")
        return None
    print(f"✅ {message}\n")

    # Determine output location
    skill_name = skill_path.name
    if output_dir:
        output_path = Path(output_dir).resolve()
        output_path.mkdir(parents=True, exist_ok=True)
    else:
        output_path = Path.cwd()

    zip_filename = output_path / f"{skill_name}.zip"

    # Create the zip file
    try:
        with zipfile.ZipFile(zip_filename, 'w', zipfile.ZIP_DEFLATED) as zipf:
            # Walk through the skill directory
            for file_path in skill_path.rglob('*'):
                if file_path.is_file():
                    # Calculate the relative path within the zip
                    arcname = file_path.relative_to(skill_path.parent)
                    zipf.write(file_path, arcname)
                    print(f"  Added: {arcname}")

        print(f"\n✅ Successfully packaged skill to: {zip_filename}")
        return zip_filename

    except Exception as e:
        print(f"❌ Error creating zip file: {e}")
        return None


def main():
    if len(sys.argv) < 2:
        print("Usage: python utils/package_skill.py <path/to/skill-folder> [output-directory]")
        print("\nExample:")
        print("  python utils/package_skill.py skills/public/my-skill")
        print("  python utils/package_skill.py skills/public/my-skill ./dist")
        sys.exit(1)

    skill_path = sys.argv[1]
    output_dir = sys.argv[2] if len(sys.argv) > 2 else None

    print(f"📦 Packaging skill: {skill_path}")
    if output_dir:
        print(f"   Output directory: {output_dir}")
    print()

    result = package_skill(skill_path, output_dir)

    if result:
        sys.exit(0)
    else:
        sys.exit(1)


if __name__ == "__main__":
    main()
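
Like `init_skill()`, `package_skill()` is importable; validation runs inside it, and a failed check returns `None` instead of raising. A sketch (paths illustrative):

```python
# Sketch: packaging programmatically; validate_skill runs before zipping.
from package_skill import package_skill

zip_path = package_skill("skills/public/data-analyzer", "./dist")
print(zip_path)  # Path to data-analyzer.zip on success, None if validation failed
```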
skill-creator/scripts/quick_validate.py DELETED
@@ -1,65 +0,0 @@
#!/usr/bin/env python3
"""
Quick validation script for skills - minimal version
"""

import sys
import os
import re
from pathlib import Path

def validate_skill(skill_path):
    """Basic validation of a skill"""
    skill_path = Path(skill_path)

    # Check SKILL.md exists
    skill_md = skill_path / 'SKILL.md'
    if not skill_md.exists():
        return False, "SKILL.md not found"

    # Read and validate frontmatter
    content = skill_md.read_text()
    if not content.startswith('---'):
        return False, "No YAML frontmatter found"

    # Extract frontmatter
    match = re.match(r'^---\n(.*?)\n---', content, re.DOTALL)
    if not match:
        return False, "Invalid frontmatter format"

    frontmatter = match.group(1)

    # Check required fields
    if 'name:' not in frontmatter:
        return False, "Missing 'name' in frontmatter"
    if 'description:' not in frontmatter:
        return False, "Missing 'description' in frontmatter"

    # Extract name for validation
    name_match = re.search(r'name:\s*(.+)', frontmatter)
    if name_match:
        name = name_match.group(1).strip()
        # Check naming convention (hyphen-case: lowercase with hyphens)
        if not re.match(r'^[a-z0-9-]+$', name):
            return False, f"Name '{name}' should be hyphen-case (lowercase letters, digits, and hyphens only)"
        if name.startswith('-') or name.endswith('-') or '--' in name:
            return False, f"Name '{name}' cannot start/end with hyphen or contain consecutive hyphens"

    # Extract and validate description
    desc_match = re.search(r'description:\s*(.+)', frontmatter)
    if desc_match:
        description = desc_match.group(1).strip()
        # Check for angle brackets
        if '<' in description or '>' in description:
            return False, "Description cannot contain angle brackets (< or >)"

    return True, "Skill is valid!"

if __name__ == "__main__":
    if len(sys.argv) != 2:
        print("Usage: python quick_validate.py <skill_directory>")
        sys.exit(1)

    valid, message = validate_skill(sys.argv[1])
    print(message)
    sys.exit(0 if valid else 1)
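
A quick way to see these checks in action is to validate a throwaway skill directory; a self-contained sketch (all names illustrative):

```python
# Sketch: exercising validate_skill against a temporary skill directory.
from pathlib import Path
import tempfile

from quick_validate import validate_skill

with tempfile.TemporaryDirectory() as tmp:
    skill = Path(tmp) / "demo-skill"
    skill.mkdir()
    (skill / "SKILL.md").write_text(
        "---\n"
        "name: demo-skill\n"
        "description: Example skill used to exercise the validator.\n"
        "---\n\n"
        "# Demo Skill\n"
    )
    valid, message = validate_skill(skill)
    print(valid, message)  # expected: True "Skill is valid!"
```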
skills DELETED
@@ -1 +0,0 @@
Subproject commit 92990b2da655659d36b670ff922866bbb41c177c
 
 
submit_job.py DELETED
@@ -1,16 +0,0 @@
from huggingface_hub import run_uv_job

job = run_uv_job(
    script="https://huggingface.co/evalstate/demo-training-scripts/resolve/main/train_demo.py",
    flavor="t4-small",
    timeout="20m",
    secrets={"HF_TOKEN": "$HF_TOKEN"},
)

print(f"\n✅ Job submitted successfully!")
print(f"Job ID: {job.job_id}")
print(f"Monitor: https://huggingface.co/jobs/{job.job_id}")
print(f"\nExpected time: ~10-15 minutes")
print(f"Estimated cost: ~$0.20")
print(f"\nThe job is running in the background!")
print(f"📊 Once training starts, view metrics at: https://huggingface.co/spaces/evalstate/training-demo-dashboard")
train_demo.py DELETED
@@ -1,93 +0,0 @@
# /// script
# dependencies = [
#     "trl>=0.12.0",
#     "peft>=0.7.0",
#     "transformers>=4.36.0",
#     "accelerate>=0.24.0",
#     "trackio",
# ]
# ///

import trackio
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTTrainer, SFTConfig

# Initialize Trackio for real-time monitoring
trackio.init(
    project="qwen-demo-sft",
    space_id="evalstate/trackio-demo",  # Will auto-create if doesn't exist
    config={
        "model": "Qwen/Qwen2.5-0.5B",
        "dataset": "trl-lib/Capybara",
        "dataset_size": 50,
        "learning_rate": 2e-5,
        "max_steps": 20,
        "demo": True,
    }
)

# Load dataset (only 50 examples for quick demo)
dataset = load_dataset("trl-lib/Capybara", split="train[:50]")
print(f"✅ Dataset loaded: {len(dataset)} examples")
print(f"📝 Sample: {dataset[0]}")

# Training configuration
config = SFTConfig(
    # Hub settings - CRITICAL for saving results
    output_dir="qwen-demo-sft",
    push_to_hub=True,
    hub_model_id="evalstate/qwen-demo-sft",
    hub_strategy="end",  # Push only at end for demo

    # Training parameters (minimal for quick demo)
    max_steps=20,  # Very short training
    per_device_train_batch_size=2,
    gradient_accumulation_steps=2,
    learning_rate=2e-5,

    # Logging
    logging_steps=5,
    save_strategy="no",  # Don't save checkpoints during training

    # Optimization
    warmup_steps=5,
    lr_scheduler_type="cosine",

    # Monitoring
    report_to="trackio",
)

# LoRA configuration (reduces memory usage)
peft_config = LoraConfig(
    r=8,  # Small rank for demo
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],
)

# Initialize trainer
print("🚀 Initializing trainer...")
trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",
    train_dataset=dataset,
    args=config,
    peft_config=peft_config,
)

# Train
print("🔥 Starting training (20 steps)...")
trainer.train()

# Push to Hub
print("💾 Pushing model to Hub...")
trainer.push_to_hub()

# Finish Trackio tracking
trackio.finish()

print("✅ Training complete!")
print(f"📦 Model: https://huggingface.co/evalstate/qwen-demo-sft")
print(f"📊 Metrics: https://huggingface.co/spaces/evalstate/trackio-demo")
train_minimal.py DELETED
@@ -1,28 +0,0 @@
# /// script
# dependencies = ["trl>=0.12.0", "peft>=0.7.0"]
# ///

from datasets import load_dataset
from peft import LoraConfig
from trl import SFTTrainer, SFTConfig

# Load 50 examples
dataset = load_dataset("trl-lib/Capybara", split="train[:50]")

# Train with minimal config
trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",
    train_dataset=dataset,
    peft_config=LoraConfig(r=8, lora_alpha=16),
    args=SFTConfig(
        output_dir="output",
        push_to_hub=True,
        hub_model_id="evalstate/qwen-demo-minimal",
        max_steps=20,
        report_to="none",  # No monitoring for quick demo
    )
)

trainer.train()
trainer.push_to_hub()
print("✅ Done! Model at: https://huggingface.co/evalstate/qwen-demo-minimal")
train_organized.py DELETED
@@ -1,155 +0,0 @@
# /// script
# dependencies = ["trl>=0.12.0", "peft>=0.7.0", "trackio"]
# ///

import os
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTTrainer, SFTConfig

# ============================================================================
# Configuration
# ============================================================================

class Config:
    """Training configuration with environment overrides"""
    MODEL = os.getenv("MODEL", "Qwen/Qwen2.5-0.5B")
    DATASET = os.getenv("DATASET", "trl-lib/Capybara")
    DATASET_SPLIT = os.getenv("DATASET_SPLIT", "train[:50]")
    OUTPUT_REPO = os.getenv("OUTPUT_REPO", "evalstate/qwen-demo")

    # Training
    MAX_STEPS = int(os.getenv("MAX_STEPS", "20"))
    BATCH_SIZE = int(os.getenv("BATCH_SIZE", "2"))
    LEARNING_RATE = float(os.getenv("LEARNING_RATE", "2e-5"))

    # LoRA
    LORA_R = int(os.getenv("LORA_R", "8"))
    LORA_ALPHA = int(os.getenv("LORA_ALPHA", "16"))

    # Monitoring
    TRACKIO_SPACE = os.getenv("TRACKIO_SPACE", "evalstate/ml-experiments")

# ============================================================================
# Setup Functions
# ============================================================================

def setup_monitoring(config: Config):
    """Initialize Trackio for experiment tracking"""
    import trackio

    project_name = config.OUTPUT_REPO.split('/')[-1]

    trackio.init(
        project=project_name,
        space_id=config.TRACKIO_SPACE,
        config={
            "model": config.MODEL,
            "dataset": config.DATASET,
            "max_steps": config.MAX_STEPS,
            "learning_rate": config.LEARNING_RATE,
            "lora_r": config.LORA_R,
        }
    )

    print(f"📊 Trackio: https://huggingface.co/spaces/{config.TRACKIO_SPACE}")

def load_and_validate_dataset(config: Config):
    """Load dataset and perform basic validation"""
    dataset = load_dataset(config.DATASET, split=config.DATASET_SPLIT)

    print(f"✅ Dataset loaded: {len(dataset)} examples")
    print(f"   Columns: {dataset.column_names}")

    # Basic validation
    assert len(dataset) > 0, "Dataset is empty!"
    assert "messages" in dataset.column_names, "Expected 'messages' column"

    return dataset

def create_training_config(config: Config) -> SFTConfig:
    """Create SFT training configuration"""
    return SFTConfig(
        # Output
        output_dir="output",
        push_to_hub=True,
        hub_model_id=config.OUTPUT_REPO,
        hub_strategy="end",

        # Training
        max_steps=config.MAX_STEPS,
        per_device_train_batch_size=config.BATCH_SIZE,
        gradient_accumulation_steps=2,
        learning_rate=config.LEARNING_RATE,

        # Optimization
        warmup_ratio=0.1,
        lr_scheduler_type="cosine",

        # Logging
        logging_steps=max(1, config.MAX_STEPS // 4),
        report_to="trackio",
        save_strategy="no",
    )

def create_peft_config(config: Config) -> LoraConfig:
    """Create LoRA/PEFT configuration"""
    return LoraConfig(
        r=config.LORA_R,
        lora_alpha=config.LORA_ALPHA,
        lora_dropout=0.05,
        bias="none",
        task_type="CAUSAL_LM",
    )

# ============================================================================
# Main Training Flow
# ============================================================================

def main():
    """Main training pipeline"""
    config = Config()

    # Print configuration
    print("🚀 Training Configuration:")
    print(f"   Model: {config.MODEL}")
    print(f"   Dataset: {config.DATASET} ({config.DATASET_SPLIT})")
    print(f"   Output: {config.OUTPUT_REPO}")
    print(f"   Steps: {config.MAX_STEPS}")
    print(f"   Learning Rate: {config.LEARNING_RATE}")
    print(f"   LoRA r={config.LORA_R}, alpha={config.LORA_ALPHA}")
    print()

    # Setup
    setup_monitoring(config)
    dataset = load_and_validate_dataset(config)
    training_config = create_training_config(config)
    peft_config = create_peft_config(config)

    # Train
    print("🔥 Initializing trainer...")
    trainer = SFTTrainer(
        model=config.MODEL,
        train_dataset=dataset,
        args=training_config,
        peft_config=peft_config,
    )

    print("🏃 Training started...")
    trainer.train()

    print("💾 Pushing to Hub...")
    trainer.push_to_hub()

    # Cleanup
    import trackio
    trackio.finish()

    # Summary
    print()
    print("✅ Training Complete!")
    print(f"📦 Model: https://huggingface.co/{config.OUTPUT_REPO}")
    print(f"📊 Metrics: https://huggingface.co/spaces/{config.TRACKIO_SPACE}")

if __name__ == "__main__":
    main()
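
One subtlety of this layout: the `Config` class attributes are read from `os.getenv` when the class body executes, i.e. at import time, so overrides must be in the environment before the module is imported. A sketch, with illustrative values:

```python
# Sketch: Config evaluates os.getenv() when the class body runs (at import),
# so set any overrides first. The values below are illustrative.
import os

os.environ["MAX_STEPS"] = "100"
os.environ["OUTPUT_REPO"] = "evalstate/qwen-demo-long"

import train_organized  # Config is evaluated here

train_organized.main()
```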
train_production.py DELETED
@@ -1,95 +0,0 @@
# /// script
# dependencies = ["trl>=0.12.0", "peft>=0.7.0", "trackio"]
# ///

import os
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTTrainer, SFTConfig

# Configuration from environment (with sensible defaults)
MODEL = os.getenv("MODEL", "Qwen/Qwen2.5-0.5B")
DATASET = os.getenv("DATASET", "trl-lib/Capybara")
OUTPUT_REPO = os.getenv("OUTPUT_REPO", "evalstate/qwen-capybara-sft")
MAX_STEPS = int(os.getenv("MAX_STEPS", "20"))
BATCH_SIZE = int(os.getenv("BATCH_SIZE", "2"))
LEARNING_RATE = float(os.getenv("LEARNING_RATE", "2e-5"))
USE_TRACKIO = os.getenv("USE_TRACKIO", "true").lower() == "true"

print(f"🚀 Training Configuration:")
print(f"   Model: {MODEL}")
print(f"   Dataset: {DATASET}")
print(f"   Output: {OUTPUT_REPO}")
print(f"   Max Steps: {MAX_STEPS}")
print(f"   Monitoring: {'Trackio' if USE_TRACKIO else 'None'}")

# Setup monitoring if enabled
if USE_TRACKIO:
    import trackio
    trackio.init(
        project=OUTPUT_REPO.split('/')[-1],  # Use model name as project
        space_id="evalstate/ml-experiments",  # Single space for all
        config={
            "model": MODEL,
            "dataset": DATASET,
            "max_steps": MAX_STEPS,
            "learning_rate": LEARNING_RATE,
        }
    )

# Load dataset
dataset = load_dataset(DATASET, split="train[:50]")
print(f"✅ Loaded {len(dataset)} examples")

# Configure training
config = SFTConfig(
    output_dir="output",
    push_to_hub=True,
    hub_model_id=OUTPUT_REPO,
    hub_strategy="end",

    # Training params
    max_steps=MAX_STEPS,
    per_device_train_batch_size=BATCH_SIZE,
    gradient_accumulation_steps=2,
    learning_rate=LEARNING_RATE,

    # Optimization
    warmup_ratio=0.1,
    lr_scheduler_type="cosine",

    # Logging
    logging_steps=max(1, MAX_STEPS // 4),
    report_to="trackio" if USE_TRACKIO else "none",

    # No checkpoints for demo
    save_strategy="no",
)

# LoRA config
peft_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

# Train
print("🔥 Starting training...")
trainer = SFTTrainer(
    model=MODEL,
    train_dataset=dataset,
    args=config,
    peft_config=peft_config,
)

trainer.train()
trainer.push_to_hub()

if USE_TRACKIO:
    trackio.finish()

print(f"✅ Complete! Model: https://huggingface.co/{OUTPUT_REPO}")
if USE_TRACKIO:
    print(f"📊 Metrics: https://huggingface.co/spaces/evalstate/ml-experiments")
train_production_documented.py DELETED
@@ -1,203 +0,0 @@
# /// script
# dependencies = ["trl>=0.12.0", "peft>=0.7.0", "trackio"]
# ///

"""
Production Training Script with Environment Configuration

CUSTOMIZABLE PARAMETERS (via environment variables):
    MODEL          - Model to fine-tune (default: Qwen/Qwen2.5-0.5B)
    DATASET        - Dataset name on Hub (default: trl-lib/Capybara)
    OUTPUT_REPO    - Where to save model (default: evalstate/qwen-capybara-sft)
    MAX_STEPS      - Training steps (default: 20)
    BATCH_SIZE     - Batch size per device (default: 2)
    LEARNING_RATE  - Learning rate (default: 2e-5)
    USE_TRACKIO    - Enable monitoring (default: true)

EXAMPLE USAGE:
    # Default (quick demo):
    hf_jobs("uv", {"script": "train_production.py", "flavor": "t4-small"})

    # Custom settings:
    hf_jobs("uv", {
        "script": "train_production.py",
        "flavor": "a10g-large",
        "env": {
            "MODEL": "meta-llama/Llama-3.2-1B",
            "MAX_STEPS": "100",
            "LEARNING_RATE": "1e-5"
        }
    })
"""

import os
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTTrainer, SFTConfig

# ============================================================================
# CONFIGURATION - Customize via environment variables
# ============================================================================

# Model Selection
MODEL = os.getenv("MODEL", "Qwen/Qwen2.5-0.5B")
# Common options:
#   - Qwen/Qwen2.5-0.5B (fast, demo)
#   - Qwen/Qwen2.5-3B (production)
#   - meta-llama/Llama-3.2-1B
#   - HuggingFaceTB/SmolLM2-1.7B

# Dataset Selection
DATASET = os.getenv("DATASET", "trl-lib/Capybara")
# Use any conversational dataset with "messages" field
# Examples: trl-lib/Capybara, HuggingFaceH4/ultrachat_200k

# Output Configuration
OUTPUT_REPO = os.getenv("OUTPUT_REPO", "evalstate/qwen-capybara-sft")
# Must be in format: username/model-name

# Training Parameters
MAX_STEPS = int(os.getenv("MAX_STEPS", "20"))
# Quick demo: 10-20 | Development: 100-500 | Production: 1000+

BATCH_SIZE = int(os.getenv("BATCH_SIZE", "2"))
# Adjust based on GPU memory: t4-small=2, a10g-large=4-8

LEARNING_RATE = float(os.getenv("LEARNING_RATE", "2e-5"))
# Typical range: 1e-5 to 5e-5

# Monitoring
USE_TRACKIO = os.getenv("USE_TRACKIO", "true").lower() == "true"
# Set to "false" to disable real-time monitoring

# ============================================================================
# FIXED CONFIGURATION - Advanced users can modify these directly
# ============================================================================

# LoRA Configuration (reduces memory usage)
LORA_R = 8           # Rank (higher = more parameters, better quality)
LORA_ALPHA = 16      # Scaling factor (typically 2x LORA_R)
LORA_DROPOUT = 0.05  # Dropout rate

# Training Advanced
GRADIENT_ACCUMULATION = 2  # Effective batch size = BATCH_SIZE * this
WARMUP_RATIO = 0.1         # Percentage of steps for warmup
LR_SCHEDULER = "cosine"    # Learning rate schedule

# Logging
LOGGING_STEPS = None  # Auto-calculated (MAX_STEPS // 4)

# Trackio Space (single dashboard for all experiments)
TRACKIO_SPACE = "evalstate/ml-experiments"

# ============================================================================
# TRAINING SCRIPT - No need to modify below this line
# ============================================================================

print("="*80)
print("🚀 TRAINING CONFIGURATION")
print("="*80)
print(f"Model: {MODEL}")
print(f"Dataset: {DATASET}")
print(f"Output: {OUTPUT_REPO}")
print(f"Max Steps: {MAX_STEPS}")
print(f"Batch Size: {BATCH_SIZE}")
print(f"Learning Rate: {LEARNING_RATE}")
print(f"Monitoring: {'Trackio' if USE_TRACKIO else 'Disabled'}")
print(f"LoRA: r={LORA_R}, alpha={LORA_ALPHA}")
print("="*80)
print()

# Setup monitoring if enabled
if USE_TRACKIO:
    import trackio
    trackio.init(
        project=OUTPUT_REPO.split('/')[-1],  # Use model name as project
        space_id=TRACKIO_SPACE,
        config={
            "model": MODEL,
            "dataset": DATASET,
            "max_steps": MAX_STEPS,
            "batch_size": BATCH_SIZE,
            "learning_rate": LEARNING_RATE,
            "lora_r": LORA_R,
        }
    )
    print(f"📊 Trackio Dashboard: https://huggingface.co/spaces/{TRACKIO_SPACE}")
    print()

# Load dataset (first 50 examples for demo)
print("📦 Loading dataset...")
dataset = load_dataset(DATASET, split="train[:50]")
print(f"✅ Loaded {len(dataset)} examples")
print()

# Configure training
logging_steps = LOGGING_STEPS or max(1, MAX_STEPS // 4)

config = SFTConfig(
    # Output
    output_dir="output",
    push_to_hub=True,
    hub_model_id=OUTPUT_REPO,
    hub_strategy="end",

    # Training params (customizable via env vars)
    max_steps=MAX_STEPS,
    per_device_train_batch_size=BATCH_SIZE,
    gradient_accumulation_steps=GRADIENT_ACCUMULATION,
    learning_rate=LEARNING_RATE,

    # Optimization
    warmup_ratio=WARMUP_RATIO,
    lr_scheduler_type=LR_SCHEDULER,

    # Logging
    logging_steps=logging_steps,
    report_to="trackio" if USE_TRACKIO else "none",

    # No checkpoints for demo (saves time)
    save_strategy="no",
)

# LoRA configuration (reduces memory usage)
peft_config = LoraConfig(
    r=LORA_R,
    lora_alpha=LORA_ALPHA,
    lora_dropout=LORA_DROPOUT,
    bias="none",
    task_type="CAUSAL_LM",
)

# Initialize trainer
print("🔥 Initializing trainer...")
trainer = SFTTrainer(
    model=MODEL,
    train_dataset=dataset,
    args=config,
    peft_config=peft_config,
)

# Train
print("🏃 Training started...")
print()
trainer.train()
print()

# Save to Hub
print("💾 Pushing model to Hub...")
trainer.push_to_hub()

# Finish monitoring
if USE_TRACKIO:
    trackio.finish()

# Summary
print()
print("="*80)
print("✅ TRAINING COMPLETE!")
print("="*80)
print(f"📦 Model: https://huggingface.co/{OUTPUT_REPO}")
if USE_TRACKIO:
    print(f"📊 Metrics: https://huggingface.co/spaces/{TRACKIO_SPACE}")
print("="*80)