Commit b86f414 · 1 parent: b3891ac
jiaxin committed

update README
docs/vllm_deploy_guide.md CHANGED
@@ -32,12 +32,12 @@ The following are recommended configurations; actual requirements should be adjusted
 
 It is recommended to use a virtual environment (such as **venv**, **conda**, or **uv**) to avoid dependency conflicts.
 
-We recommend installing vLLM in a fresh Python environment. Since it has not been released yet, you need to manually build it from the source code:
+We recommend installing vLLM in a fresh Python environment:
 
 ```bash
-git clone https://github.com/vllm-project/vllm.git
-cd vllm
-uv pip install . --torch-backend=auto
+uv venv
+source .venv/bin/activate
+uv pip install vllm --extra-index-url https://wheels.vllm.ai/nightly
 ```
 
 Run the following command to start the vLLM server. vLLM will automatically download and cache the MiniMax-M2 model from Hugging Face.
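
The change swaps a from-source build for vLLM's published nightly wheel index, so no local compile is needed before a stable release ships. The launch command the last context line refers to sits outside this hunk; as a minimal sketch, assuming the Hugging Face model id `MiniMaxAI/MiniMax-M2` and commonly used vLLM serve flags (assumptions, not taken from this guide), it might look like:

```bash
# Sketch only: the model id and flags below are assumptions, not part of this diff.
# --trust-remote-code lets vLLM load the repo's custom modeling code;
# --tensor-parallel-size should match the number of available GPUs.
vllm serve MiniMaxAI/MiniMax-M2 \
  --trust-remote-code \
  --tensor-parallel-size 4
```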
docs/vllm_deploy_guide_cn.md CHANGED
@@ -32,11 +32,11 @@
 
 It is recommended to use a virtual environment (such as **venv**, **conda**, or **uv**) to avoid dependency conflicts.
 
-We recommend installing vLLM in a fresh Python environment. Since it has not been released yet, you need to manually build it from source:
+We recommend installing vLLM in a fresh Python environment:
 ```bash
-git clone https://github.com/vllm-project/vllm.git
-cd vllm
-uv pip install . --torch-backend=auto
+uv venv
+source .venv/bin/activate
+uv pip install vllm --extra-index-url https://wheels.vllm.ai/nightly
 ```
 
 Run the following command to start the vLLM server. vLLM will automatically download and cache the MiniMax-M2 model from Hugging Face.
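
Once the server is up, vLLM exposes an OpenAI-compatible HTTP API (port 8000 by default). A quick smoke test, assuming the model id from the launch sketch above:

```bash
# Query the OpenAI-compatible chat endpoint; assumes the default port 8000
# and the (assumed) model id used at launch.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "MiniMaxAI/MiniMax-M2",
        "messages": [{"role": "user", "content": "Say hello"}]
      }'
```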