jiaxin committed · Commit b86f414 · 1 Parent(s): b3891ac

update README
docs/vllm_deploy_guide.md CHANGED

````diff
@@ -32,12 +32,12 @@ The following are recommended configurations; actual requirements should be adjusted
 
 It is recommended to use a virtual environment (such as **venv**, **conda**, or **uv**) to avoid dependency conflicts.
 
-We recommend installing vLLM in a fresh Python environment
+We recommend installing vLLM in a fresh Python environment:
 
 ```bash
-
-
-uv pip install
+uv venv
+source .venv/bin/activate
+uv pip install vllm --extra-index-url https://wheels.vllm.ai/nightly
 ```
 
 Run the following command to start the vLLM server. vLLM will automatically download and cache the MiniMax-M2 model from Hugging Face.
````
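The change replaces a truncated `uv pip install` with a complete sequence that creates a virtual environment, activates it, and installs vLLM; the `--extra-index-url https://wheels.vllm.ai/nightly` flag pulls nightly wheels, which suggests MiniMax-M2 support had not yet landed in a stable vLLM release. The server-start command itself falls outside this hunk, so the following is only a hedged sketch of a typical vLLM launch; the model id `MiniMaxAI/MiniMax-M2` and the flag values are assumptions, not taken from the diff:

```bash
# Hypothetical launch command; model id and flag values are assumptions.
# `vllm serve` starts vLLM's OpenAI-compatible HTTP server (default port 8000).
vllm serve MiniMaxAI/MiniMax-M2 \
  --trust-remote-code \
  --tensor-parallel-size 4  # adjust to the number of available GPUs
```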
docs/vllm_deploy_guide_cn.md CHANGED

````diff
@@ -32,11 +32,11 @@
 
 It is recommended to use a virtual environment (such as **venv**, **conda**, or **uv**) to avoid dependency conflicts.
 
-We recommend installing vLLM in a fresh Python environment
+We recommend installing vLLM in a fresh Python environment:
 ```bash
-
-
-uv pip install
+uv venv
+source .venv/bin/activate
+uv pip install vllm --extra-index-url https://wheels.vllm.ai/nightly
 ```
 
 Run the following command to start the vLLM server; vLLM will automatically download and cache the MiniMax-M2 model from Hugging Face.
````
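Once the server described in either guide is running, a quick smoke test against vLLM's OpenAI-compatible API confirms the model is being served. A minimal sketch, assuming the default port 8000 and the hypothetical model id `MiniMaxAI/MiniMax-M2` from the sketch above:

```bash
# Assumes vLLM's default listen address (localhost:8000) and that the model
# was served under the id MiniMaxAI/MiniMax-M2 -- both are assumptions.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "MiniMaxAI/MiniMax-M2",
        "messages": [{"role": "user", "content": "Say hello."}]
      }'
```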