Document OCR using DeepSeek-OCR
This dataset contains markdown-formatted OCR results from images in Alysonhower/test using DeepSeek-OCR.
Processing Details
- Source Dataset: Alysonhower/test
- Model: deepseek-ai/DeepSeek-OCR
- Number of Samples: 1
- Processing Time: 1.5 minutes
- Processing Date: 2025-10-23 13:06 UTC
Configuration
- Image Column: image
- Output Column: markdown
- Dataset Split: train
- Resolution Mode: gundam
- Base Size: 1024
- Image Size: 640
- Crop Mode: True
Model Information
DeepSeek-OCR is a state-of-the-art document OCR model that excels at:
- LaTeX equations - Mathematical formulas preserved in LaTeX format
- Tables - Extracted and formatted as HTML/markdown
- Document structure - Headers, lists, and formatting maintained
- Image grounding - Spatial layout and bounding box information
- Complex layouts - Multi-column and hierarchical structures
- Multilingual - Supports multiple languages
Resolution Modes
- Tiny (512×512): Fast processing, 64 vision tokens
- Small (640×640): Balanced speed/quality, 100 vision tokens
- Base (1024×1024): High quality, 256 vision tokens
- Large (1280×1280): Maximum quality, 400 vision tokens
- Gundam (dynamic): Adaptive multi-tile processing for large documents
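The fixed-resolution modes above can be captured in a small lookup table, e.g. to estimate vision-token budgets before a run. This is an illustrative sketch (the names and helper are assumptions, the sizes and token counts come from the list above; the dynamic "gundam" mode is omitted because its token count varies per document):

```python
# Fixed-resolution modes and their vision-token counts
# (values from the list above; "gundam" is dynamic and omitted).
RESOLUTION_MODES = {
    "tiny": {"size": (512, 512), "vision_tokens": 64},
    "small": {"size": (640, 640), "vision_tokens": 100},
    "base": {"size": (1024, 1024), "vision_tokens": 256},
    "large": {"size": (1280, 1280), "vision_tokens": 400},
}

def vision_token_budget(mode: str, num_images: int) -> int:
    """Estimate total vision tokens for a batch at a fixed-resolution mode."""
    return RESOLUTION_MODES[mode]["vision_tokens"] * num_images

print(vision_token_budget("base", 10))  # 2560
```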
Dataset Structure
The dataset contains all original columns plus:
- markdown: The extracted text in markdown format with preserved structure
- inference_info: JSON list tracking all OCR models applied to this dataset
Usage
from datasets import load_dataset
import json
# Load the dataset
dataset = load_dataset("{{output_dataset_id}}", split="train")
# Access the markdown text
for example in dataset:
    print(example["markdown"])
    break
# View all OCR models applied to this dataset
inference_info = json.loads(dataset[0]["inference_info"])
for info in inference_info:
    print(f"Column: {info['column_name']} - Model: {info['model_id']}")
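A common next step is writing the extracted markdown to files, one per row. The sketch below is illustrative, not part of the script: the helper name is hypothetical, and the stand-in rows mimic the dataset's markdown column so the example is self-contained:

```python
from pathlib import Path

def save_markdown(rows, out_dir="ocr_output"):
    """Write each row's 'markdown' column to a numbered .md file."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    paths = []
    for i, row in enumerate(rows):
        path = out / f"page_{i:04d}.md"
        path.write_text(row["markdown"], encoding="utf-8")
        paths.append(path)
    return paths

# Stand-in rows with the same "markdown" column the dataset provides;
# in practice, pass the loaded dataset split instead.
rows = [{"markdown": "# Page 1\nHello"}, {"markdown": "# Page 2\nWorld"}]
print(save_markdown(rows))
```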
Reproduction
This dataset was generated using the uv-scripts/ocr DeepSeek OCR script:
uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/deepseek-ocr.py \
Alysonhower/test \
<output-dataset> \
--resolution-mode gundam \
--image-column image
Performance
- Processing Speed: ~0.01 images/second (1 image in 1.5 minutes)
- Processing Method: Sequential (Transformers API, no batching)
Note: This uses the official Transformers implementation. For faster batch processing, consider using the vLLM version once DeepSeek-OCR is officially supported by vLLM.
Generated with 🤗 UV Scripts