davanstrien (HF Staff) committed
Commit 8388345 Β· 1 Parent(s): 70d9d22

Add olmOCR2 script and usage instructions

Files changed (2)
  1. README.md +23 -0
  2. olmocr2-vllm.py +568 -0
README.md CHANGED
@@ -124,6 +124,20 @@ Compact multilingual OCR using [rednote-hilab/dots.ocr](https://huggingface.co/r
 
124
  - 🎯 **Compact** - Only 1.7B parameters, efficient on smaller GPUs
125
  - πŸ”€ **Flexible prompts** - Switch between OCR, layout-all, and layout-only modes
126
 
127
+ ### olmOCR2 (`olmocr2-vllm.py`)
128
+
129
+ High-quality document OCR using [allenai/olmOCR-2-7B-1025-FP8](https://huggingface.co/allenai/olmOCR-2-7B-1025-FP8) optimized with GRPO reinforcement learning:
130
+
131
+ - 🎯 **High accuracy** - 82.4 ± 1.1 on olmOCR-Bench (84.9% on math)
132
+ - πŸ“ **LaTeX equations** - Mathematical formulas in LaTeX format
133
+ - πŸ“Š **Table extraction** - Structured table recognition
134
+ - πŸ“‘ **Multi-column layouts** - Complex document structures
135
+ - πŸ—œοΈ **FP8 quantized** - Efficient FP8 build of the 7B model for faster inference
136
+ - πŸ“œ **Degraded scans** - Works well on old/historical documents
137
+ - πŸ“ **Long text extraction** - Headers, footers, and full document content
138
+ - 🧩 **YAML metadata** - Structured front matter (language, rotation, content type); see the sample below
139
+ - πŸš€ **Based on Qwen2.5-VL-7B** - Fine-tuned with reinforcement learning
140
+
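Each result is plain markdown preceded by a YAML front matter block with the fields listed above (the values here are illustrative):

```markdown
---
primary_language: en
is_rotation_valid: true
rotation_correction: 0
is_table: false
is_diagram: false
---
# Document content here...
```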
141
 
142
  ## πŸ†• New Features
143
 
@@ -243,6 +257,15 @@ hf jobs uv run \
257
  --include-thinking \
258
  --shuffle
259
 
260
+ # olmOCR2 - High-quality OCR with YAML metadata
261
+ hf jobs uv run \
262
+ --flavor a100-large \
263
+ --secrets HF_TOKEN \
264
+ https://huggingface.co/datasets/uv-scripts/ocr/raw/main/olmocr2-vllm.py \
265
+ your-input-dataset your-output-dataset \
266
+ --batch-size 16 \
267
+ --max-samples 100
268
+
269
  # Private dataset with custom settings
270
  hf jobs uv run --flavor l40sx1 \
271
  --secrets HF_TOKEN \
olmocr2-vllm.py ADDED
@@ -0,0 +1,568 @@
1
+ # /// script
2
+ # requires-python = ">=3.11"
3
+ # dependencies = [
4
+ # "datasets",
5
+ # "huggingface-hub[hf_transfer]",
6
+ # "pillow",
7
+ # "vllm",
8
+ # "tqdm",
9
+ # "toolz",
10
+ # "torch",
11
+ # "pyyaml", # For parsing YAML front matter
12
+ # ]
13
+ #
14
+ # ///
15
+
16
+ """
17
+ Convert document images to markdown using olmOCR-2 with vLLM.
18
+
19
+ This script processes images through the olmOCR-2-7B model to extract
20
+ text and structure as markdown, optimized for document understanding.
21
+
22
+ Features:
23
+ - LaTeX equation recognition
24
+ - HTML table extraction
25
+ - Document structure preservation (headers, lists, formatting)
26
+ - Rotation detection and correction metadata
27
+ - Figure and chart descriptions
28
+ - Natural reading order inference
29
+ - High-quality OCR for various document types
30
+
31
+ Model: allenai/olmOCR-2-7B-1025-FP8
32
+ Based on: Qwen2.5-VL-7B-Instruct fine-tuned on olmOCR-mix
33
+ """
34
+
35
+ import argparse
36
+ import base64
37
+ import io
38
+ import json
39
+ import logging
40
+ import os
41
+ import re
42
+ import sys
43
+ from datetime import datetime
44
+ from typing import Any, Dict, List, Union
45
+
46
+ import torch
47
+ import yaml
48
+ from datasets import load_dataset
49
+ from huggingface_hub import DatasetCard, login
50
+ from PIL import Image
51
+ from toolz import partition_all
52
+ from tqdm.auto import tqdm
53
+ from vllm import LLM, SamplingParams
54
+
55
+ logging.basicConfig(level=logging.INFO)
56
+ logger = logging.getLogger(__name__)
57
+
58
+ # olmOCR no-anchoring prompt (from olmocr/prompts/prompts.py:build_no_anchoring_v4_yaml_prompt)
59
+ OLMOCR_PROMPT = (
60
+ "Attached is one page of a document that you must process. "
61
+ "Just return the plain text representation of this document as if you were reading it naturally. "
62
+ "Convert equations to LateX and tables to HTML.\n"
63
+ "If there are any figures or charts, label them with the following markdown syntax "
64
+ "![Alt text describing the contents of the figure](page_startx_starty_width_height.png)\n"
65
+ "Return your output as markdown, with a front matter section on top specifying values for the "
66
+ "primary_language, is_rotation_valid, rotation_correction, is_table, and is_diagram parameters."
67
+ )
68
+
69
+
70
+ def check_cuda_availability():
71
+ """Check if CUDA is available and exit if not."""
72
+ if not torch.cuda.is_available():
73
+ logger.error("CUDA is not available. This script requires a GPU.")
74
+ logger.error("Please run on a machine with a CUDA-capable GPU.")
75
+ sys.exit(1)
76
+ else:
77
+ logger.info(f"CUDA is available. GPU: {torch.cuda.get_device_name(0)}")
78
+
79
+
80
+ def parse_yaml_frontmatter(text: str) -> tuple[dict, str]:
81
+ """
82
+ Parse YAML front matter from olmOCR output.
83
+
84
+ Expected format:
85
+ ---
86
+ primary_language: en
87
+ is_rotation_valid: true
88
+ rotation_correction: 0
89
+ is_table: false
90
+ is_diagram: false
91
+ ---
92
+ # Document content here...
93
+
94
+ Returns:
95
+ (metadata_dict, content_without_frontmatter)
96
+ """
97
+ # Match YAML front matter between --- markers
98
+ pattern = r'^---\s*\n(.*?)\n---\s*\n(.*)$'
99
+ match = re.match(pattern, text.strip(), re.DOTALL)
100
+
101
+ if match:
102
+ yaml_str = match.group(1)
103
+ content = match.group(2)
104
+ try:
105
+ metadata = yaml.safe_load(yaml_str)
106
+ return metadata or {}, content
107
+ except yaml.YAMLError as e:
108
+ logger.warning(f"Failed to parse YAML front matter: {e}")
109
+ return {}, text
110
+ else:
111
+ # No front matter found, return empty metadata
112
+ logger.warning("No YAML front matter found in output")
113
+ return {}, text
114
+
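# Example (sketch): parsing a typical olmOCR-2 response with the helper above.
# The sample text is hypothetical but follows the front matter format described
# in the docstring.
_sample_response = """---
primary_language: en
is_rotation_valid: true
rotation_correction: 0
is_table: false
is_diagram: false
---
# Quarterly Report

Revenue grew compared to the previous year."""
_metadata, _content = parse_yaml_frontmatter(_sample_response)
# _metadata -> {"primary_language": "en", "is_rotation_valid": True, ...}
# _content  -> the markdown body with the front matter stripped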
115
+
116
+ def make_ocr_message(
117
+ image: Union[Image.Image, Dict[str, Any], str],
118
+ prompt: str = OLMOCR_PROMPT,
119
+ ) -> List[Dict]:
120
+ """Create chat message for olmOCR processing."""
121
+ # Convert to PIL Image if needed
122
+ if isinstance(image, Image.Image):
123
+ pil_img = image
124
+ elif isinstance(image, dict) and "bytes" in image:
125
+ pil_img = Image.open(io.BytesIO(image["bytes"]))
126
+ elif isinstance(image, str):
127
+ pil_img = Image.open(image)
128
+ else:
129
+ raise ValueError(f"Unsupported image type: {type(image)}")
130
+
131
+ # Convert to base64 data URI
132
+ buf = io.BytesIO()
133
+ pil_img.save(buf, format="PNG")
134
+ data_uri = f"data:image/png;base64,{base64.b64encode(buf.getvalue()).decode()}"
135
+
136
+ # Return message in vLLM format
137
+ return [
138
+ {
139
+ "role": "user",
140
+ "content": [
141
+ {"type": "image_url", "image_url": {"url": data_uri}},
142
+ {"type": "text", "text": prompt},
143
+ ],
144
+ }
145
+ ]
146
+
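# Example (sketch): building a single chat request from an in-memory image.
# The blank test image below is just a stand-in for a real scanned page.
_page = Image.new("RGB", (640, 480), "white")
_message = make_ocr_message(_page)
# _message is a one-element conversation: the page as a base64 PNG data URI
# followed by OLMOCR_PROMPT, ready to be passed to llm.chat() in batches.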
147
+
148
+ def create_dataset_card(
149
+ source_dataset: str,
150
+ model: str,
151
+ num_samples: int,
152
+ processing_time: str,
153
+ batch_size: int,
154
+ max_model_len: int,
155
+ max_tokens: int,
156
+ gpu_memory_utilization: float,
157
+ image_column: str = "image",
158
+ split: str = "train",
159
+ ) -> str:
160
+ """Create a dataset card documenting the OCR process."""
161
+ model_name = model.split("/")[-1]
162
+
163
+ return f"""---
164
+ viewer: false
165
+ tags:
166
+ - ocr
167
+ - document-processing
168
+ - olmocr
169
+ - markdown
170
+ - uv-script
171
+ - generated
172
+ ---
173
+
174
+ # Document OCR using {model_name}
175
+
176
+ This dataset contains markdown-formatted OCR results from images in [{source_dataset}](https://huggingface.co/datasets/{source_dataset}) using olmOCR-2-7B.
177
+
178
+ ## Processing Details
179
+
180
+ - **Source Dataset**: [{source_dataset}](https://huggingface.co/datasets/{source_dataset})
181
+ - **Model**: [{model}](https://huggingface.co/{model})
182
+ - **Number of Samples**: {num_samples:,}
183
+ - **Processing Time**: {processing_time}
184
+ - **Processing Date**: {datetime.now().strftime("%Y-%m-%d %H:%M UTC")}
185
+
186
+ ### Configuration
187
+
188
+ - **Image Column**: `{image_column}`
189
+ - **Output Column**: `markdown`
190
+ - **Dataset Split**: `{split}`
191
+ - **Batch Size**: {batch_size}
192
+ - **Max Model Length**: {max_model_len:,} tokens
193
+ - **Max Output Tokens**: {max_tokens:,}
194
+ - **GPU Memory Utilization**: {gpu_memory_utilization:.1%}
195
+
196
+ ## Model Information
197
+
198
+ olmOCR-2-7B is a high-quality document OCR model based on Qwen2.5-VL-7B-Instruct, fine-tuned on the olmOCR-mix-1025 dataset and optimized with GRPO reinforcement learning.
199
+
200
+ Key features:
201
+ - πŸ“ **LaTeX equations** - Mathematical formulas in LaTeX format
202
+ - πŸ“Š **HTML tables** - Structured table extraction
203
+ - πŸ“ **Document structure** - Headers, lists, formatting preserved
204
+ - πŸ–ΌοΈ **Figure descriptions** - Charts and figures labeled with descriptions
205
+ - πŸ”„ **Rotation detection** - Metadata about document orientation
206
+ - πŸ“‘ **Natural reading order** - Handles multi-column and complex layouts
207
+ - 🎯 **High accuracy** - Scores 82.4 ± 1.1 on olmOCR-Bench
208
+
209
+ ## Output Format
210
+
211
+ Each row contains:
212
+ - Original image from source dataset
213
+ - `markdown`: Extracted document content in markdown format
214
+ - `olmocr_metadata`: JSON with document metadata (language, rotation, table/diagram flags)
215
+
216
+ ## Columns
217
+
218
+ - `{image_column}`: Original document image
219
+ - `markdown`: Extracted text and structure in markdown
220
+ - `olmocr_metadata`: Document metadata (primary_language, is_rotation_valid, rotation_correction, is_table, is_diagram)
221
+ - `inference_info`: Processing metadata (model, script version, timestamp)
222
+
223
+ ## Reproduction
224
+
225
+ ```bash
226
+ # Using HF Jobs (recommended)
227
+ hf jobs uv run --flavor l4x1 \\
228
+ -s HF_TOKEN \\
229
+ https://huggingface.co/datasets/uv-scripts/ocr/raw/main/olmocr2-vllm.py \\
230
+ {source_dataset} \\
231
+ your-username/output-dataset
232
+
233
+ # Local with GPU
234
+ uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/olmocr2-vllm.py \\
235
+ {source_dataset} \\
236
+ your-username/output-dataset
237
+ ```
238
+
239
+ ## Citation
240
+
241
+ ```bibtex
242
+ @misc{{olmocr,
243
+ title={{{{olmOCR: Unlocking Trillions of Tokens in PDFs with Vision Language Models}}}},
244
+ author={{Jake Poznanski and Jon Borchardt and Jason Dunkelberger and Regan Huff and Daniel Lin and Aman Rangapur and Christopher Wilhelm and Kyle Lo and Luca Soldaini}},
245
+ year={{2025}},
246
+ eprint={{2502.18443}},
247
+ archivePrefix={{arXiv}},
248
+ primaryClass={{cs.CL}},
249
+ url={{https://arxiv.org/abs/2502.18443}},
250
+ }}
251
+ ```
252
+
253
+ ---
254
+ *Generated with [uv-scripts/ocr](https://huggingface.co/datasets/uv-scripts/ocr)*
255
+ """
256
+
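# Example (sketch): rendering a card for placeholder values; the dataset ID
# below is hypothetical.
_card_text = create_dataset_card(
    source_dataset="your-username/document-pages",
    model="allenai/olmOCR-2-7B-1025-FP8",
    num_samples=100,
    processing_time="0h 12m 30s",
    batch_size=16,
    max_model_len=16384,
    max_tokens=8192,
    gpu_memory_utilization=0.8,
)
# _card_text starts with the "---" YAML metadata block the Hub expects,
# followed by the human-readable processing summary.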
257
+
258
+ def main(
259
+ input_dataset: str,
260
+ output_dataset: str,
261
+ image_column: str = "image",
262
+ batch_size: int = 16,
263
+ model: str = "allenai/olmOCR-2-7B-1025-FP8",
264
+ max_model_len: int = 16384,
265
+ max_tokens: int = 8192,
266
+ temperature: float = 0.1,
267
+ gpu_memory_utilization: float = 0.8,
268
+ hf_token: str = None,
269
+ split: str = "train",
270
+ max_samples: int = None,
271
+ private: bool = False,
272
+ shuffle: bool = False,
273
+ seed: int = 42,
274
+ ):
275
+ """
276
+ Process a dataset of document images through olmOCR-2 to extract markdown.
277
+
278
+ Args:
279
+ input_dataset: HuggingFace dataset ID containing images
280
+ output_dataset: HuggingFace dataset ID for output
281
+ image_column: Column name containing images
282
+ batch_size: Number of images to process at once
283
+ model: HuggingFace model ID for olmOCR
284
+ max_model_len: Maximum context length
285
+ max_tokens: Maximum tokens to generate per image
286
+ temperature: Sampling temperature (default 0.1 for near-deterministic output)
287
+ gpu_memory_utilization: Fraction of GPU memory to use
288
+ hf_token: HuggingFace token for authentication
289
+ split: Dataset split to process
290
+ max_samples: Limit number of samples (for testing)
291
+ private: Make output dataset private
292
+ shuffle: Shuffle dataset before processing
293
+ seed: Random seed for shuffling
294
+ """
295
+ import time
296
+ start_time = time.time()
297
+
298
+ # Check CUDA availability
299
+ check_cuda_availability()
300
+
301
+ # Login to HuggingFace if token provided
302
+ if hf_token:
303
+ login(token=hf_token)
304
+ elif "HF_TOKEN" in os.environ:
305
+ login(token=os.environ["HF_TOKEN"])
306
+
307
+ # Load dataset
308
+ logger.info(f"Loading dataset: {input_dataset}")
309
+ ds = load_dataset(input_dataset, split=split)
310
+
311
+ # Shuffle if requested
312
+ if shuffle:
313
+ logger.info(f"Shuffling dataset with seed {seed}")
314
+ ds = ds.shuffle(seed=seed)
315
+
316
+ # Limit samples if requested
317
+ if max_samples:
318
+ logger.info(f"Limiting to {max_samples} samples")
319
+ ds = ds.select(range(min(max_samples, len(ds))))
320
+
321
+ logger.info(f"Processing {len(ds)} samples")
322
+
323
+ # Column names for the OCR output, parsed metadata, and inference info
324
+ output_column_name = "markdown"
325
+ metadata_column_name = "olmocr_metadata"
326
+ inference_info_column = "inference_info"
327
+
328
+ # Initialize LLM
329
+ logger.info(f"Initializing vLLM with model: {model}")
330
+ llm = LLM(
331
+ model=model,
332
+ max_model_len=max_model_len,
333
+ gpu_memory_utilization=gpu_memory_utilization,
334
+ limit_mm_per_prompt={"image": 1},
335
+ )
336
+
337
+ # Sampling parameters - olmOCR uses a low temperature for near-deterministic output
338
+ sampling_params = SamplingParams(
339
+ temperature=temperature,
340
+ max_tokens=max_tokens,
341
+ stop=["<|im_end|>", "<|endoftext|>"],
342
+ )
343
+
344
+ # Process in batches
345
+ all_outputs = []
346
+ all_metadata = []
347
+
348
+ for batch in tqdm(
349
+ list(partition_all(batch_size, ds)),
350
+ desc="Processing batches",
351
+ ):
352
+ # Create messages for batch
353
+ messages = [make_ocr_message(item[image_column]) for item in batch]
354
+
355
+ # Run inference
356
+ outputs = llm.chat(messages, sampling_params=sampling_params)
357
+
358
+ # Extract text and parse YAML front matter
359
+ for output in outputs:
360
+ response_text = output.outputs[0].text
361
+ metadata, content = parse_yaml_frontmatter(response_text)
362
+ all_outputs.append(content)
363
+ all_metadata.append(json.dumps(metadata))
364
+
365
+ # Add results to dataset
366
+ ds = ds.add_column(output_column_name, all_outputs)
367
+ ds = ds.add_column(metadata_column_name, all_metadata)
368
+
369
+ # Add inference information
370
+ inference_info = json.dumps({
371
+ "model": model,
372
+ "script": "olmocr2-vllm.py",
373
+ "version": "1.0.0",
374
+ "timestamp": datetime.now().isoformat(),
375
+ "batch_size": batch_size,
376
+ "max_tokens": max_tokens,
377
+ "temperature": temperature,
378
+ })
379
+
380
+ # Handle existing inference_info column
381
+ if inference_info_column in ds.column_names:
382
+ # Parse existing, append new model info
383
+ def update_inference_info(example):
384
+ try:
385
+ existing = json.loads(example[inference_info_column])
386
+ if not isinstance(existing, list):
387
+ existing = [existing]
388
+ except (json.JSONDecodeError, KeyError):
389
+ existing = []
390
+
391
+ existing.append(json.loads(inference_info))
392
+ return {inference_info_column: json.dumps(existing)}
393
+
394
+ ds = ds.map(update_inference_info)
395
+ else:
396
+ ds = ds.add_column(inference_info_column, [inference_info] * len(ds))
397
+
398
+ # Calculate processing time
399
+ elapsed_time = time.time() - start_time
400
+ hours = int(elapsed_time // 3600)
401
+ minutes = int((elapsed_time % 3600) // 60)
402
+ seconds = int(elapsed_time % 60)
403
+ processing_time = f"{hours}h {minutes}m {seconds}s"
404
+
405
+ # Create and save dataset card
406
+ card_content = create_dataset_card(
407
+ source_dataset=input_dataset,
408
+ model=model,
409
+ num_samples=len(ds),
410
+ processing_time=processing_time,
411
+ batch_size=batch_size,
412
+ max_model_len=max_model_len,
413
+ max_tokens=max_tokens,
414
+ gpu_memory_utilization=gpu_memory_utilization,
415
+ image_column=image_column,
416
+ split=split,
417
+ )
418
+
419
+ # Push to hub
420
+ logger.info(f"Pushing to HuggingFace Hub: {output_dataset}")
421
+ ds.push_to_hub(
422
+ output_dataset,
423
+ private=private,
424
+ )
425
+
426
+ # Update dataset card
427
+ card = DatasetCard(card_content)
428
+ card.push_to_hub(output_dataset)
429
+
430
+ logger.info(f"βœ“ Processing complete!")
431
+ logger.info(f"βœ“ Dataset: https://huggingface.co/datasets/{output_dataset}")
432
+ logger.info(f"βœ“ Processing time: {processing_time}")
433
+ logger.info(f"βœ“ Samples processed: {len(ds):,}")
434
+
435
+
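# Example (sketch): calling the pipeline programmatically instead of via the CLI.
# The dataset IDs are placeholders; any dataset with an "image" column works.
#
#   main(
#       input_dataset="your-username/document-pages",
#       output_dataset="your-username/document-pages-olmocr",
#       batch_size=8,
#       max_samples=10,
#   )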
436
+ if __name__ == "__main__":
437
+ parser = argparse.ArgumentParser(
438
+ description="Convert document images to markdown using olmOCR-2",
439
+ formatter_class=argparse.RawDescriptionHelpFormatter,
440
+ epilog="""
441
+ Examples:
442
+
443
+ 1. Basic OCR on a dataset:
444
+ uv run olmocr2-vllm.py input-dataset output-dataset
445
+
446
+ 2. Test with first 10 samples:
447
+ uv run olmocr2-vllm.py input-dataset output-dataset --max-samples 10
448
+
449
+ 3. Process with custom batch size:
450
+ uv run olmocr2-vllm.py input-dataset output-dataset --batch-size 8
451
+
452
+ 4. Custom image column:
453
+ uv run olmocr2-vllm.py input-dataset output-dataset --image-column page_image
454
+
455
+ 5. Private output dataset:
456
+ uv run olmocr2-vllm.py input-dataset output-dataset --private
457
+
458
+ 6. Random sampling:
459
+ uv run olmocr2-vllm.py input-dataset output-dataset --max-samples 100 --shuffle
460
+
461
+ 7. Running on HuggingFace Jobs:
462
+ hf jobs uv run --flavor l4x1 \\
463
+ -s HF_TOKEN \\
464
+ https://huggingface.co/datasets/uv-scripts/ocr/raw/main/olmocr2-vllm.py \\
465
+ input-dataset output-dataset
466
+
467
+ 8. Real example with historical documents:
468
+ hf jobs uv run --flavor l4x1 \\
469
+ -s HF_TOKEN \\
470
+ https://huggingface.co/datasets/uv-scripts/ocr/raw/main/olmocr2-vllm.py \\
471
+ NationalLibraryOfScotland/Britain-and-UK-Handbooks-Dataset \\
472
+ your-username/handbooks-olmocr \\
473
+ --max-samples 100 \\
474
+ --shuffle
475
+ """,
476
+ )
477
+
478
+ parser.add_argument("input_dataset", help="Input HuggingFace dataset ID")
479
+ parser.add_argument("output_dataset", help="Output HuggingFace dataset ID")
480
+ parser.add_argument(
481
+ "--image-column",
482
+ default="image",
483
+ help="Column name containing images (default: image)",
484
+ )
485
+ parser.add_argument(
486
+ "--batch-size",
487
+ type=int,
488
+ default=16,
489
+ help="Batch size for processing (default: 16)",
490
+ )
491
+ parser.add_argument(
492
+ "--model",
493
+ default="allenai/olmOCR-2-7B-1025-FP8",
494
+ help="Model to use (default: allenai/olmOCR-2-7B-1025-FP8)",
495
+ )
496
+ parser.add_argument(
497
+ "--max-model-len",
498
+ type=int,
499
+ default=16384,
500
+ help="Maximum model context length (default: 16384)",
501
+ )
502
+ parser.add_argument(
503
+ "--max-tokens",
504
+ type=int,
505
+ default=8192,
506
+ help="Maximum tokens to generate (default: 8192)",
507
+ )
508
+ parser.add_argument(
509
+ "--temperature",
510
+ type=float,
511
+ default=0.1,
512
+ help="Sampling temperature (default: 0.1 for deterministic output)",
513
+ )
514
+ parser.add_argument(
515
+ "--gpu-memory-utilization",
516
+ type=float,
517
+ default=0.8,
518
+ help="GPU memory utilization (default: 0.8)",
519
+ )
520
+ parser.add_argument(
521
+ "--hf-token",
522
+ help="HuggingFace token (or set HF_TOKEN env var)",
523
+ )
524
+ parser.add_argument(
525
+ "--split",
526
+ default="train",
527
+ help="Dataset split to process (default: train)",
528
+ )
529
+ parser.add_argument(
530
+ "--max-samples",
531
+ type=int,
532
+ help="Maximum number of samples to process (for testing)",
533
+ )
534
+ parser.add_argument(
535
+ "--private",
536
+ action="store_true",
537
+ help="Make output dataset private",
538
+ )
539
+ parser.add_argument(
540
+ "--shuffle",
541
+ action="store_true",
542
+ help="Shuffle dataset before processing",
543
+ )
544
+ parser.add_argument(
545
+ "--seed",
546
+ type=int,
547
+ default=42,
548
+ help="Random seed for shuffling (default: 42)",
549
+ )
550
+
551
+ args = parser.parse_args()
552
+ main(
553
+ input_dataset=args.input_dataset,
554
+ output_dataset=args.output_dataset,
555
+ image_column=args.image_column,
556
+ batch_size=args.batch_size,
557
+ model=args.model,
558
+ max_model_len=args.max_model_len,
559
+ max_tokens=args.max_tokens,
560
+ temperature=args.temperature,
561
+ gpu_memory_utilization=args.gpu_memory_utilization,
562
+ hf_token=args.hf_token,
563
+ split=args.split,
564
+ max_samples=args.max_samples,
565
+ private=args.private,
566
+ shuffle=args.shuffle,
567
+ seed=args.seed,
568
+ )