davanstrien (HF Staff) and Claude committed
Commit 90889e4 · 1 parent: f987b04

Switch DeepSeek-OCR scripts to vLLM nightly wheels


PR #27247 is now merged to main, so we can use the official nightly wheels
instead of the fork branch.

Changes:
- Replace `vllm @ git+https://github.com/Isotr0py/vllm.git@deepseek-ocr` with `vllm`
- Add a `[[tool.uv.index]]` entry for https://wheels.vllm.ai
- Keep `prerelease = "allow"` for nightly builds
- Update docstrings to reflect the PR's merged status
- Update error messages (remove the PR reference)

This should provide faster installation since we're using pre-built wheels
instead of compiling from source.
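For reference, the assembled PEP 723 inline metadata block at the top of the test script after this change looks like the following; `uv run` reads it to resolve dependencies, pulling the nightly vLLM wheel from wheels.vllm.ai instead of building from source:

```python
# /// script
# requires-python = ">=3.11"
# dependencies = [
#     "vllm",
#     "pillow",
#     "datasets",
#     "torch",
#     "huggingface-hub",
# ]
#
# [[tool.uv.index]]
# name = "vllm-nightly"
# url = "https://wheels.vllm.ai"
#
# [tool.uv]
# prerelease = "allow"
# ///
```

With this header in place, `uv run deepseek-ocr-vllm-test.py` installs the pre-built nightly wheel in an isolated environment before executing the script.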

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

Files changed (2)
  1. deepseek-ocr-vllm-test.py +11 -7
  2. deepseek-ocr-vllm.py +7 -4
deepseek-ocr-vllm-test.py

@@ -1,22 +1,26 @@
 # /// script
 # requires-python = ">=3.11"
 # dependencies = [
-#     "vllm @ git+https://github.com/Isotr0py/vllm.git@deepseek-ocr",
+#     "vllm",
 #     "pillow",
 #     "datasets",
 #     "torch",
 #     "huggingface-hub",
 # ]
 #
+# [[tool.uv.index]]
+# name = "vllm-nightly"
+# url = "https://wheels.vllm.ai"
+#
 # [tool.uv]
 # prerelease = "allow"
 # ///
 
 """
-Minimal test script for DeepSeek-OCR with vLLM (PR #27247 testing).
+Minimal test script for DeepSeek-OCR with vLLM.
 
 This is an MVP script to validate that DeepSeek-OCR works with vLLM.
-Installs vLLM from Isotr0py's fork (PR #27247 branch) since not yet merged to main.
+Installs vLLM from nightly wheels (PR #27247 now merged to main).
 
 Use this to test in Colab before building the full production script.
@@ -88,7 +92,7 @@ def make_message(image: Image.Image, prompt: str) -> List[Dict]:
 
 def main():
     parser = argparse.ArgumentParser(
-        description="Test DeepSeek-OCR with vLLM (PR #27247)",
+        description="Test DeepSeek-OCR with vLLM",
         formatter_class=argparse.RawDescriptionHelpFormatter,
         epilog="""
 Examples:
@@ -189,9 +193,9 @@ Examples:
     except Exception as e:
         logger.error(f"❌ Failed to load model: {e}")
         logger.error("\nThis might mean:")
-        logger.error("  1. PR #27247 is not merged/available in this vLLM version")
-        logger.error("  2. The model architecture is not recognized")
-        logger.error("  3. Missing dependencies")
+        logger.error("  1. The model architecture is not recognized")
+        logger.error("  2. Missing dependencies")
+        logger.error("  3. Insufficient GPU memory")
         sys.exit(1)
 
     # Run inference
deepseek-ocr-vllm.py

@@ -4,12 +4,16 @@
 #     "datasets",
 #     "huggingface-hub[hf_transfer]",
 #     "pillow",
-#     "vllm @ git+https://github.com/Isotr0py/vllm.git@deepseek-ocr",
+#     "vllm",
 #     "tqdm",
 #     "toolz",
 #     "torch",
 # ]
 #
+# [[tool.uv.index]]
+# name = "vllm-nightly"
+# url = "https://wheels.vllm.ai"
+#
 # [tool.uv]
 # prerelease = "allow"
 # ///
@@ -20,9 +24,8 @@ Convert document images to markdown using DeepSeek-OCR with vLLM.
 This script processes images through the DeepSeek-OCR model to extract
 text and structure as markdown, using vLLM for efficient batch processing.
 
-NOTE: Uses vLLM from PR #27247 (Isotr0py/vllm@deepseek-ocr branch) as
-DeepSeek-OCR support is not yet merged into main vLLM. First run will take
-10-15 minutes to compile vLLM from source.
+NOTE: Uses vLLM nightly wheels from main (PR #27247 now merged). First run
+may take a few minutes to download and install dependencies.
 
 Features:
 - Multiple resolution modes (Tiny/Small/Base/Large/Gundam)