mattupson and mit-ra committed
Commit 877e393 · verified · 1 parent: a274260

Adds readme as variation of https://huggingface.co/datasets/uv-scripts/classification/blob/main/README.md. (#3)


Co-authored-by: Raphael Mitsch <mit-ra@users.noreply.huggingface.co>

Files changed (1): README.md (+362 −1)
README.md CHANGED
@@ -1,4 +1,365 @@
  ---
  license: mit
+ task_categories:
+ - zero-shot-classification
+ - text-classification
+ tags:
+ - uv-script
+ - classification
+ - structured-outputs
+ - zero-shot
  ---
- # Hugging Face Dataset Classification With Sieves
+ # Hugging Face Dataset Classification With Sieves
+
+ GPU-accelerated text classification for Hugging Face datasets, with guaranteed valid outputs via structured
+ generation using [Sieves](https://github.com/MantisAI/sieves/), [Outlines](https://github.com/dottxt-ai/outlines), and
+ Hugging Face zero-shot pipelines.
+
+ This is a modified version of https://huggingface.co/datasets/uv-scripts/classification.
+
+ ## 🚀 Quick Start
+
+ ```bash
+ # Classify IMDB reviews
+ uv run examples/classify-dataset.py \
+   --input-dataset stanfordnlp/imdb \
+   --column text \
+   --labels "positive,negative" \
+   --model HuggingFaceTB/SmolLM-360M-Instruct \
+   --output-dataset user/imdb-classified
+ ```
+
+ That's it! No installation, no setup: just `uv run`.
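+
+ Once the run finishes, you can inspect the results directly. A minimal sketch, assuming the script adds a
+ `classification` column (the exact column name may differ; check the output dataset):
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the classified dataset back from the Hub.
+ ds = load_dataset("user/imdb-classified", split="train")
+
+ # "classification" is an assumed column name; adjust to what the script actually writes.
+ print(ds[0]["text"][:100], "->", ds[0]["classification"])
+ ```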
+
+ ## 📋 Requirements
+
+ - **GPU Recommended**: uses GPU-accelerated inference (a CPU fallback is available but slow)
+ - Python 3.12+
+ - UV (handles all dependencies automatically)
+
+ **Python Package Dependencies** (automatically installed via UV; see the inline-metadata sketch below):
+ - `sieves` with engines support (>= 0.17.4)
+ - `typer` (>= 0.12)
+ - `datasets`
+ - `huggingface-hub`
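+
+ UV can resolve these on the fly because standalone scripts declare their dependencies as inline metadata
+ (PEP 723). The script's header presumably looks roughly like this (the exact pins are assumptions based on
+ the list above):
+
+ ```python
+ # /// script
+ # requires-python = ">=3.12"
+ # dependencies = [
+ #     "sieves>=0.17.4",
+ #     "typer>=0.12",
+ #     "datasets",
+ #     "huggingface-hub",
+ # ]
+ # ///
+ ```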
+
+ ## 🎯 Features
+
+ - **Guaranteed valid outputs** using structured generation with Outlines guided decoding
+ - **Zero-shot classification** with no training data required
+ - **GPU-optimized** for maximum throughput and efficiency
+ - **Multi-label support** for documents with multiple applicable labels
+ - **Flexible model selection** - works with any instruction-tuned transformer model
+ - **Robust text handling** with preprocessing and validation
+ - **Automatic progress tracking** and detailed statistics
+ - **Direct Hub integration** - read and write datasets seamlessly
+ - **Label descriptions** support for providing context to improve accuracy
+ - **Optimized batching** with Sieves' automatic batch processing
+ - **Multiple guided backends** - supports `outlines` guided decoding for any generative language model on the Hugging Face Hub, as well as fast Hugging Face zero-shot classification pipelines
+
+ ## 💻 Usage
+
+ ### Basic Classification
+
+ ```bash
+ uv run examples/classify-dataset.py \
+   --input-dataset <dataset-id> \
+   --column <text-column> \
+   --labels <comma-separated-labels> \
+   --model <model-id> \
+   --output-dataset <output-id>
+ ```
+
+ ### Arguments
+
+ **Required:**
+
+ - `--input-dataset`: Hugging Face dataset ID (e.g., `stanfordnlp/imdb`, `user/my-dataset`)
+ - `--column`: Name of the text column to classify
+ - `--labels`: Comma-separated classification labels (e.g., `"spam,ham"`)
+ - `--model`: Model to use (e.g., `HuggingFaceTB/SmolLM-360M-Instruct`)
+ - `--output-dataset`: Where to save the classified dataset
+
+ **Optional:**
+
+ - `--label-descriptions`: Descriptions for each label, to improve classification accuracy
+ - `--multi-label`: Enable multi-label classification mode (creates multi-hot encoded labels)
+ - `--split`: Dataset split to process (default: `train`)
+ - `--max-samples`: Limit the number of samples (useful for testing)
+ - `--shuffle`: Shuffle the dataset before selecting samples (useful for random sampling)
+ - `--shuffle-seed`: Random seed for shuffling
+ - `--batch-size`: Batch size for inference (default: 64)
+ - `--max-tokens`: Maximum tokens to generate per sample (default: 200)
+ - `--hf-token`: Hugging Face token (or use the `HF_TOKEN` env var)
+
+ ### Label Descriptions
+
+ Provide context for your labels to improve classification accuracy, passed as comma-separated `label:description` pairs:
+
+ ```bash
+ uv run examples/classify-dataset.py \
+   --input-dataset user/support-tickets \
+   --column content \
+   --labels "bug,feature,question,other" \
+   --label-descriptions "bug:something is broken,feature:request for new functionality,question:asking for help,other:anything else" \
+   --model HuggingFaceTB/SmolLM-360M-Instruct \
+   --output-dataset user/tickets-classified
+ ```
+
+ The model uses these descriptions to better understand what each label represents, leading to more accurate classifications.
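+
+ A hypothetical sketch of how such a string maps to label/description pairs (the script's actual parsing may
+ differ; this simple scheme assumes descriptions contain no commas):
+
+ ```python
+ def parse_label_descriptions(raw: str) -> dict[str, str]:
+     """Split 'label:description,label:description' into a dict."""
+     return dict(pair.split(":", 1) for pair in raw.split(","))
+
+ print(parse_label_descriptions("bug:something is broken,feature:request for new functionality"))
+ # {'bug': 'something is broken', 'feature': 'request for new functionality'}
+ ```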
+
+ ### Multi-Label Classification
+
+ Enable multi-label mode for documents that can have multiple applicable labels:
+
+ ```bash
+ uv run examples/classify-dataset.py \
+   --input-dataset ag_news \
+   --column text \
+   --labels "world,sports,business,science" \
+   --multi-label \
+   --model HuggingFaceTB/SmolLM-360M-Instruct \
+   --output-dataset user/ag-news-multilabel
+ ```
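+
+ In this mode the output is multi-hot encoded: one 0/1 indicator per label rather than a single label string.
+ A sketch of what a row might look like (the column layout is an assumption; inspect the output dataset for
+ the real schema):
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("user/ag-news-multilabel", split="train")
+ # Hypothetical example row: a story tagged as both world news and business.
+ # {"text": "...", "world": 1, "sports": 0, "business": 1, "science": 0}
+ print(ds[0])
+ ```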
+
+ ## 📊 Examples
+
+ ### Sentiment Analysis
+
+ ```bash
+ uv run examples/classify-dataset.py \
+   --input-dataset stanfordnlp/imdb \
+   --column text \
+   --labels "positive,ambivalent,negative" \
+   --model HuggingFaceTB/SmolLM-360M-Instruct \
+   --output-dataset user/imdb-sentiment
+ ```
+
+ ### Support Ticket Classification
+
+ ```bash
+ uv run examples/classify-dataset.py \
+   --input-dataset user/support-tickets \
+   --column content \
+   --labels "bug,feature_request,question,other" \
+   --label-descriptions "bug:code or product not working as expected,feature_request:asking for new functionality,question:seeking help or clarification,other:general comments or feedback" \
+   --model HuggingFaceTB/SmolLM-360M-Instruct \
+   --output-dataset user/tickets-classified
+ ```
+
+ ### News Categorization
+
+ ```bash
+ uv run examples/classify-dataset.py \
+   --input-dataset ag_news \
+   --column text \
+   --labels "world,sports,business,tech" \
+   --model HuggingFaceTB/SmolLM-1.7B-Instruct \
+   --output-dataset user/ag-news-categorized
+ ```
+
+ ### Multi-Label News Classification
+
+ ```bash
+ uv run examples/classify-dataset.py \
+   --input-dataset ag_news \
+   --column text \
+   --labels "world,sports,business,tech" \
+   --multi-label \
+   --label-descriptions "world:global and international events,sports:sports and athletics,business:business and finance,tech:technology and innovation" \
+   --model HuggingFaceTB/SmolLM-1.7B-Instruct \
+   --output-dataset user/ag-news-multilabel
+ ```
+
+ This combines label descriptions with multi-label mode for comprehensive categorization of news articles.
+
+ ### ArXiv ML Research Classification
+
+ Classify academic papers into machine learning research areas:
+
+ ```bash
+ # Fast classification with random sampling
+ uv run examples/classify-dataset.py \
+   --input-dataset librarian-bots/arxiv-metadata-snapshot \
+   --column abstract \
+   --labels "llm,computer_vision,reinforcement_learning,optimization,theory,other" \
+   --label-descriptions "llm:language models and NLP,computer_vision:image and video processing,reinforcement_learning:RL and decision making,optimization:training and efficiency,theory:theoretical ML foundations,other:other ML topics" \
+   --model HuggingFaceTB/SmolLM-360M-Instruct \
+   --output-dataset user/arxiv-ml-classified \
+   --split "train" \
+   --max-samples 100 \
+   --shuffle
+
+ # Multi-label for nuanced classification
+ uv run examples/classify-dataset.py \
+   --input-dataset librarian-bots/arxiv-metadata-snapshot \
+   --column abstract \
+   --labels "multimodal,agents,reasoning,safety,efficiency" \
+   --label-descriptions "multimodal:vision-language and cross-modal models,agents:autonomous agents and tool use,reasoning:reasoning and planning systems,safety:alignment and safety research,efficiency:model optimization and deployment" \
+   --multi-label \
+   --model HuggingFaceTB/SmolLM-360M-Instruct \
+   --output-dataset user/arxiv-frontier-research \
+   --split "train[:1000]" \
+   --max-samples 50
+ ```
+
+ Multi-label mode is particularly valuable for academic abstracts, where papers often span several research areas at once.
+
+ ## 🚀 Running Locally vs Cloud
+
+ This script is optimized to run locally on GPU-equipped machines:
+
+ ```bash
+ # Local execution with your GPU
+ uv run examples/classify-dataset.py \
+   --input-dataset stanfordnlp/imdb \
+   --column text \
+   --labels "positive,negative" \
+   --model HuggingFaceTB/SmolLM-360M-Instruct \
+   --output-dataset user/imdb-classified
+ ```
+
+ For cloud deployment, you can use Hugging Face Spaces or other GPU services by adapting the command to your environment.
+
+ ## 🔧 Advanced Usage
+
+ ### Random Sampling
+
+ When working with ordered datasets, use `--shuffle` with `--max-samples` to get a representative sample:
+
+ ```bash
+ # Get 50 random reviews instead of the first 50
+ uv run examples/classify-dataset.py \
+   --input-dataset stanfordnlp/imdb \
+   --column text \
+   --labels "positive,negative" \
+   --model HuggingFaceTB/SmolLM-360M-Instruct \
+   --output-dataset user/imdb-sample \
+   --max-samples 50 \
+   --shuffle \
+   --shuffle-seed 123  # For reproducibility
+ ```
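+
+ Under the hood this is presumably equivalent to the standard `datasets` shuffle-then-select pattern (a sketch
+ of the idea, not the script's exact code):
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("stanfordnlp/imdb", split="train")
+ # Shuffle with a fixed seed for reproducibility, then take the first 50 rows.
+ sample = ds.shuffle(seed=123).select(range(50))
+ ```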
+
+ ### Using Different Models
+
+ This script works with any instruction-tuned model. Here are some recommended options:
+
+ ```bash
+ # Lightweight model for fast classification
+ uv run examples/classify-dataset.py \
+   --input-dataset user/my-dataset \
+   --column text \
+   --labels "A,B,C" \
+   --model HuggingFaceTB/SmolLM-360M-Instruct \
+   --output-dataset user/classified
+
+ # Larger model for complex classification
+ uv run examples/classify-dataset.py \
+   --input-dataset user/legal-docs \
+   --column text \
+   --labels "contract,patent,brief,memo,other" \
+   --model HuggingFaceTB/SmolLM3-3B-Instruct \
+   --output-dataset user/legal-classified
+
+ # Specialized zero-shot classifier
+ uv run examples/classify-dataset.py \
+   --input-dataset user/my-dataset \
+   --column text \
+   --labels "A,B,C" \
+   --model MoritzLaurer/deberta-v3-large-zeroshot-v2.0 \
+   --output-dataset user/classified
+ ```
+
+ ### Large Datasets
+
+ Increase `--batch-size` for higher throughput on large datasets:
+
+ ```bash
+ uv run examples/classify-dataset.py \
+   --input-dataset user/huge-dataset \
+   --column text \
+   --labels "A,B,C" \
+   --model HuggingFaceTB/SmolLM-360M-Instruct \
+   --output-dataset user/huge-classified \
+   --batch-size 128
+ ```
+
+ ## 🤝 How It Works
+
+ 1. **Sieves**: Provides a zero-shot task pipeline system for structured NLP workflows
+ 2. **Outlines**: Provides guided decoding to guarantee valid label outputs
+ 3. **UV**: Handles all dependencies automatically
+
+ The script loads your dataset, preprocesses texts, classifies each one with guaranteed valid outputs using Sieves'
+ `Classification` task, then saves the results as a new column in the output dataset.
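+
+ The guided-decoding idea is easiest to see in Outlines itself. A minimal sketch using the Outlines 0.x API
+ (an illustration only; this script drives Outlines through Sieves rather than calling it directly):
+
+ ```python
+ import outlines
+
+ # Load a small instruction-tuned model.
+ model = outlines.models.transformers("HuggingFaceTB/SmolLM-360M-Instruct")
+
+ # Constrain generation so the only possible outputs are the listed labels.
+ classify = outlines.generate.choice(model, ["positive", "negative"])
+
+ label = classify("Review: 'A wonderful film.' Sentiment:")
+ print(label)  # guaranteed to be "positive" or "negative"
+ ```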
+
+ ## 🐛 Troubleshooting
+
+ ### GPU Not Available
+
+ This script works best with a GPU but can fall back to CPU (much slower). To use a GPU (a quick check follows this list):
+
+ - Run on a machine with an NVIDIA GPU
+ - Use cloud GPU instances (AWS, GCP, Azure, etc.)
+ - Use Hugging Face Spaces with GPU hardware
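+
+ To confirm that PyTorch actually sees your GPU before launching a long run:
+
+ ```python
+ import torch
+
+ # True means CUDA is available and inference will run on the GPU.
+ print(torch.cuda.is_available())
+ if torch.cuda.is_available():
+     print(torch.cuda.get_device_name(0))
+ ```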
+
+ ### Out of Memory
+
+ - Use a smaller model (e.g., SmolLM-360M instead of a 3B model)
+ - Reduce `--batch-size` (try 32, 16, or 8)
+ - Reduce `--max-tokens` for shorter generations
+
+ ### Invalid/Skipped Texts
+
+ - Texts shorter than 3 characters are skipped
+ - Empty or None values are marked as invalid
+ - Very long texts are truncated to 4000 characters
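+
+ A hypothetical sketch of these validation rules (the script's actual implementation may differ):
+
+ ```python
+ def preprocess(text: str | None, max_chars: int = 4000) -> str | None:
+     """Return a cleaned text, or None if the row should be skipped."""
+     if text is None or len(text.strip()) < 3:
+         return None  # empty or too-short rows are marked invalid
+     return text[:max_chars]  # overly long texts are truncated
+ ```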
+
+ ### Classification Quality
+
+ - With Outlines guided decoding, outputs are guaranteed to be valid labels
+ - For better results, use clear and distinct label names
+ - Try `--label-descriptions` to provide context
+ - Use a larger model for nuanced tasks
+ - In multi-label mode, adjust the confidence threshold (defaults to 0.5)
+
+ ### Authentication Issues
+
+ If you see authentication errors (a quick check follows this list):
+
+ - Run `huggingface-cli login` to cache your token
+ - Or set `export HF_TOKEN=your_token_here`
+ - Verify your token has read/write permissions on the Hub
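+
+ To verify which account your cached token resolves to:
+
+ ```python
+ from huggingface_hub import whoami
+
+ # Raises an error if no valid token is found; otherwise returns account details.
+ print(whoami()["name"])
+ ```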
+
+ ## 🔬 Advanced Workflows
+
+ ### Full Pipeline Workflow
+
+ Start with small tests, then run on the full dataset:
+
+ ```bash
+ # Step 1: Test with small sample
+ uv run examples/classify-dataset.py \
+   --input-dataset your-dataset \
+   --column text \
+   --labels "label1,label2,label3" \
+   --model HuggingFaceTB/SmolLM-360M-Instruct \
+   --output-dataset user/test-classification \
+   --max-samples 100
+
+ # Step 2: If results look good, run on full dataset
+ uv run examples/classify-dataset.py \
+   --input-dataset your-dataset \
+   --column text \
+   --labels "label1,label2,label3" \
+   --label-descriptions "label1:description,label2:description,label3:description" \
+   --model HuggingFaceTB/SmolLM-360M-Instruct \
+   --output-dataset user/final-classification \
+   --batch-size 64
+ ```
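+
+ Between the two steps, it's worth sanity-checking the label distribution of the test run. A sketch, assuming
+ the script writes its predictions to a `classification` column (adjust to the real column name):
+
+ ```python
+ from collections import Counter
+
+ from datasets import load_dataset
+
+ ds = load_dataset("user/test-classification", split="train")
+ # A heavily skewed distribution often signals unclear labels or a too-small model.
+ print(Counter(ds["classification"]))
+ ```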
+
+ ## 📝 License
+
+ This example is provided as part of the [Sieves](https://github.com/MantisAI/sieves/) project.