zhanghanxiao committed (verified)
Commit 6eddd70 · Parent: 3721080

Update README.md

Files changed (1): README.md (+1 −1)
README.md CHANGED
```diff
@@ -187,7 +187,7 @@ Here is the example to deploy the model with multiple GPU nodes, where the maste
 # step 1. start ray on all nodes

 # step 2. start vllm server only on node 0:
-vllm serve $MODEL_PATH --port $PORT --served-model-name my_model --trust-remote-code --tensor-parallel-size 8 --pipeline-parallel-size 4 --gpu-memory-utilization 0.85
+vllm serve $MODEL_PATH --port $PORT --served-model-name my_model --trust-remote-code --tensor-parallel-size 32 --gpu-memory-utilization 0.85


 # This is only an example, please adjust arguments according to your actual environment.
```
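For context: the one-line change swaps an 8-way tensor-parallel by 4-way pipeline-parallel layout for flat 32-way tensor parallelism, so both configurations span 32 GPUs in total. The snippet leaves step 1 abstract; below is a minimal sketch of how that step is commonly done with Ray's standard CLI, assuming one head node and a placeholder address (none of these values come from the commit).

```bash
# Hypothetical sketch of "step 1. start ray on all nodes"; not part of this commit.
# On node 0 (the Ray head node); 6379 is Ray's default port:
ray start --head --port=6379

# On every other node, join the cluster started on node 0
# (<node0_ip> is a placeholder for the head node's reachable IP):
ray start --address=<node0_ip>:6379
```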