Document OCR using DeepSeek-OCR

This dataset contains markdown-formatted OCR results produced by running DeepSeek-OCR over the images in Alysonhower/test.

Processing Details

Configuration

  • Image Column: image
  • Output Column: markdown
  • Dataset Split: train
  • Resolution Mode: gundam
  • Base Size: 1024
  • Image Size: 640
  • Crop Mode: True
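
These settings correspond to the gundam resolution mode. As a minimal sketch, here is how they translate into a direct DeepSeek-OCR call through the Transformers API; the infer signature follows the model's custom code on the Hub (and may change), and page.png is a stand-in for one image from the image column:

import torch
from transformers import AutoModel, AutoTokenizer

# DeepSeek-OCR ships custom modeling code, so trust_remote_code is required
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-OCR", trust_remote_code=True)
model = AutoModel.from_pretrained("deepseek-ai/DeepSeek-OCR", trust_remote_code=True, use_safetensors=True)
model = model.eval().cuda().to(torch.bfloat16)

# Same prompt and resolution settings as recorded for this dataset
prompt = "<image>\n<|grounding|>Convert the document to markdown."
result = model.infer(
    tokenizer,
    prompt=prompt,
    image_file="page.png",   # stand-in input file
    output_path="outputs",   # where rendered results are written
    base_size=1024,          # Base Size
    image_size=640,          # Image Size
    crop_mode=True,          # Crop Mode (enables multi-tile processing)
)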

Model Information

DeepSeek-OCR is a state-of-the-art document OCR model that excels at:

  • πŸ“ LaTeX equations - Mathematical formulas preserved in LaTeX format
  • πŸ“Š Tables - Extracted and formatted as HTML/markdown
  • πŸ“ Document structure - Headers, lists, and formatting maintained
  • πŸ–ΌοΈ Image grounding - Spatial layout and bounding box information
  • πŸ” Complex layouts - Multi-column and hierarchical structures
  • 🌍 Multilingual - Supports multiple languages

Resolution Modes

  • Tiny (512Γ—512): Fast processing, 64 vision tokens
  • Small (640Γ—640): Balanced speed/quality, 100 vision tokens
  • Base (1024Γ—1024): High quality, 256 vision tokens
  • Large (1280Γ—1280): Maximum quality, 400 vision tokens
  • Gundam (dynamic): Adaptive multi-tile processing for large documents
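
As a rough sketch, the modes map to (base_size, image_size, crop_mode) settings as follows; the gundam row matches the configuration recorded above, while the fixed-size rows are assumptions based on the model card:

# Approximate settings per resolution mode
RESOLUTION_MODES = {
    "tiny":   {"base_size": 512,  "image_size": 512,  "crop_mode": False},
    "small":  {"base_size": 640,  "image_size": 640,  "crop_mode": False},
    "base":   {"base_size": 1024, "image_size": 1024, "crop_mode": False},
    "large":  {"base_size": 1280, "image_size": 1280, "crop_mode": False},
    # Gundam tiles large pages: 640x640 crops plus a 1024x1024 global view
    "gundam": {"base_size": 1024, "image_size": 640,  "crop_mode": True},
}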

Dataset Structure

The dataset contains all original columns plus:

  • markdown: The extracted text in markdown format with preserved structure
  • inference_info: JSON list tracking all OCR models applied to this dataset
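
For reference, the inference_info entry recorded for this dataset (pretty-printed):

[
  {
    "column_name": "markdown",
    "model_id": "deepseek-ai/DeepSeek-OCR",
    "processing_date": "2025-10-23T13:06:15.810156",
    "resolution_mode": "gundam",
    "base_size": 1024,
    "image_size": 640,
    "crop_mode": true,
    "prompt": "<image>\n<|grounding|>Convert the document to markdown.",
    "script": "deepseek-ocr.py",
    "script_version": "1.0.0",
    "script_url": "https://huggingface.co/datasets/uv-scripts/ocr/raw/main/deepseek-ocr.py",
    "implementation": "transformers (sequential)"
  }
]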

Usage

from datasets import load_dataset
import json

# Load the dataset (replace <output-dataset> with this dataset's repo id)
dataset = load_dataset("<output-dataset>", split="train")

# Access the markdown text
for example in dataset:
    print(example["markdown"])
    break

# View all OCR models applied to this dataset
inference_info = json.loads(dataset[0]["inference_info"])
for info in inference_info:
    print(f"Column: {info['column_name']} - Model: {info['model_id']}")

Reproduction

This dataset was generated using the uv-scripts/ocr DeepSeek OCR script:

uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/deepseek-ocr.py \
    Alysonhower/test \
    <output-dataset> \
    --resolution-mode gundam \
    --image-column image

Performance

  • Processing Speed: ~0.0 images/second
  • Processing Method: Sequential (Transformers API, no batching)

Note: This uses the official Transformers implementation. For faster batch processing, consider using the vLLM version once DeepSeek-OCR is officially supported by vLLM.

Generated with πŸ€– UV Scripts
