xuebi
committed on
Commit
·
30a4a95
1
Parent(s):
35c5c79
update: transformers docs
Signed-off-by: xuebi <xuebi@minimaxi.com>
- README.md +3 -0
- docs/transformers_deploy_guide.md +90 -0
- docs/transformers_deploy_guide_cn.md +91 -0
README.md
CHANGED
@@ -179,6 +179,9 @@ We recommend using [vLLM](https://docs.vllm.ai/en/stable/) to serve MiniMax-M2.
 
 We recommend using [MLX-LM](https://github.com/ml-explore/mlx-lm) to serve MiniMax-M2. Please refer to our [MLX Deployment Guide](https://huggingface.co/MiniMaxAI/MiniMax-M2/blob/main/docs/mlx_deploy_guide.md) for more details.
 
+### Transformers
+
+We recommend using [Transformers](https://github.com/huggingface/transformers) to serve MiniMax-M2. Please refer to our [Transformers Deployment Guide](https://huggingface.co/MiniMaxAI/MiniMax-M2/blob/main/docs/transformers_deploy_guide.md) for more details.
 
 ### Inference Parameters
 
 We recommend using the following parameters for best performance: `temperature=1.0`, `top_p=0.95`, `top_k=40`.
docs/transformers_deploy_guide.md
ADDED

# MiniMax M2 Model Transformers Deployment Guide

[English Version](./transformers_deploy_guide.md) | [Chinese Version](./transformers_deploy_guide_cn.md)

## Applicable Models

This document applies to the following models. You only need to change the model name during deployment.

- [MiniMaxAI/MiniMax-M2](https://huggingface.co/MiniMaxAI/MiniMax-M2)

The deployment process is illustrated below using MiniMax-M2 as an example.

## System Requirements

- OS: Linux
- Python: 3.9 - 3.12
- Transformers: 4.57.1
- GPU:
  - Compute capability 7.0 or higher
  - Memory requirements: 220 GB for weights

## Deployment with Python

It is recommended to use a virtual environment (such as **venv**, **conda**, or **uv**) to avoid dependency conflicts.

We recommend installing Transformers in a fresh Python environment:

```bash
uv pip install transformers torch accelerate --torch-backend=auto
```
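
If `uv` is not available, a plain `pip` command should also work. This is a minimal sketch rather than part of the original guide; it assumes the default PyTorch wheel selected by `pip` matches your CUDA setup, and it pins the Transformers version listed in the requirements above.

```bash
# Sketch: plain-pip alternative to the uv command above.
# Assumes the default torch wheel matches your CUDA installation.
pip install "transformers==4.57.1" torch accelerate
```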

Run the following Python script to perform inference. Transformers will automatically download and cache the MiniMax-M2 model from Hugging Face.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

MODEL_PATH = "MiniMaxAI/MiniMax-M2"

# Load the model across all available GPUs; trust_remote_code is needed for the MiniMax-M2 modeling code.
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)

# Sampling settings recommended in the README: temperature=1.0, top_p=0.95, top_k=40.
generation_config = GenerationConfig(do_sample=True, temperature=1.0, top_p=0.95, top_k=40)

messages = [
    {"role": "user", "content": [{"type": "text", "text": "What is your favourite condiment?"}]},
    {"role": "assistant", "content": [{"type": "text", "text": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"}]},
    {"role": "user", "content": [{"type": "text", "text": "Do you have mayonnaise recipes?"}]}
]

# Build the prompt with the model's chat template and move it to the GPU.
model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True).to("cuda")

generated_ids = model.generate(model_inputs, max_new_tokens=100, generation_config=generation_config)

response = tokenizer.batch_decode(generated_ids)[0]

print(response)
```
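
Optionally, to print tokens as they are generated instead of waiting for the full completion, a `TextStreamer` can be attached to `generate`. This is a sketch that is not part of the original guide; it reuses the `model`, `tokenizer`, `model_inputs`, and `generation_config` defined in the script above.

```python
from transformers import TextStreamer

# Stream decoded tokens to stdout as they are produced; skip_prompt avoids echoing the prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True)
_ = model.generate(model_inputs, max_new_tokens=100, generation_config=generation_config, streamer=streamer)
```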

## Common Issues

### Hugging Face Network Issues

If you encounter network issues, you can point Hugging Face downloads at a mirror endpoint before pulling the model:

```bash
export HF_ENDPOINT=https://hf-mirror.com
```
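
If downloads keep getting interrupted, another option is to pre-fetch the checkpoint into the local Hugging Face cache with `huggingface-cli` (installed alongside Transformers via `huggingface_hub`) and then run the script. A minimal sketch:

```bash
# Sketch: pre-download the weights into the local Hugging Face cache,
# so the Python script only needs to load files from disk.
huggingface-cli download MiniMaxAI/MiniMax-M2
```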

### MiniMax-M2 model is not currently supported

If you see this error, check that `trust_remote_code=True` is passed to `from_pretrained` when loading the model.

## Getting Support

If you encounter any issues while deploying the MiniMax model:

- Contact our technical support team through official channels such as email at [model@minimax.io](mailto:model@minimax.io)
- Submit an issue on our [GitHub](https://github.com/MiniMax-AI) repository

We continuously optimize the deployment experience for our models. Feedback is welcome!
docs/transformers_deploy_guide_cn.md
ADDED

# MiniMax M2 Model Transformers Deployment Guide

[English Version](./transformers_deploy_guide.md) | [Chinese Version](./transformers_deploy_guide_cn.md)

## Applicable Models

This document applies to the following models; only the model name needs to be changed at deployment time.

- [MiniMaxAI/MiniMax-M2](https://huggingface.co/MiniMaxAI/MiniMax-M2)

The deployment process below uses MiniMax-M2 as an example.

## System Requirements

- OS: Linux
- Python: 3.9 - 3.12
- Transformers: 4.57.1
- GPU:
  - Compute capability 7.0 or higher
  - GPU memory: 220 GB for the model weights

## Deployment with Python

It is recommended to use a virtual environment (such as **venv**, **conda**, or **uv**) to avoid dependency conflicts.

We recommend installing Transformers in a fresh Python environment:

```bash
uv pip install transformers torch accelerate --torch-backend=auto
```
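
If `uv` is not available, a plain `pip` command should also work. This is a minimal sketch rather than part of the original guide; it assumes the default PyTorch wheel selected by `pip` matches your CUDA setup, and it pins the Transformers version listed in the requirements above.

```bash
# Sketch: plain-pip alternative to the uv command above.
# Assumes the default torch wheel matches your CUDA installation.
pip install "transformers==4.57.1" torch accelerate
```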

Run the following Python script to perform inference. Transformers will automatically download and cache the MiniMax-M2 model from Hugging Face.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

MODEL_PATH = "MiniMaxAI/MiniMax-M2"

# Load the model across all available GPUs; trust_remote_code is needed for the MiniMax-M2 modeling code.
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)

# Sampling settings recommended in the README: temperature=1.0, top_p=0.95, top_k=40.
generation_config = GenerationConfig(do_sample=True, temperature=1.0, top_p=0.95, top_k=40)

messages = [
    {"role": "user", "content": [{"type": "text", "text": "What is your favourite condiment?"}]},
    {"role": "assistant", "content": [{"type": "text", "text": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"}]},
    {"role": "user", "content": [{"type": "text", "text": "Do you have mayonnaise recipes?"}]}
]

# Build the prompt with the model's chat template and move it to the GPU.
model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True).to("cuda")

generated_ids = model.generate(model_inputs, max_new_tokens=100, generation_config=generation_config)

response = tokenizer.batch_decode(generated_ids)[0]

print(response)
```
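
Optionally, to print tokens as they are generated instead of waiting for the full completion, a `TextStreamer` can be attached to `generate`. This is a sketch that is not part of the original guide; it reuses the `model`, `tokenizer`, `model_inputs`, and `generation_config` defined in the script above.

```python
from transformers import TextStreamer

# Stream decoded tokens to stdout as they are produced; skip_prompt avoids echoing the prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True)
_ = model.generate(model_inputs, max_new_tokens=100, generation_config=generation_config, streamer=streamer)
```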

## Common Issues

### Hugging Face Network Issues

If you run into network issues, you can point Hugging Face downloads at a mirror endpoint before pulling the model:

```bash
export HF_ENDPOINT=https://hf-mirror.com
```
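
If downloads keep getting interrupted, another option is to pre-fetch the checkpoint into the local Hugging Face cache with `huggingface-cli` (installed alongside Transformers via `huggingface_hub`) and then run the script. A minimal sketch:

```bash
# Sketch: pre-download the weights into the local Hugging Face cache,
# so the Python script only needs to load files from disk.
huggingface-cli download MiniMaxAI/MiniMax-M2
```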

### MiniMax-M2 model is not currently supported

If you see this error, make sure `trust_remote_code=True` is passed to `from_pretrained` when loading the model.

## Getting Support

If you encounter any issues while deploying the MiniMax model:

- Contact our technical support team through official channels such as email at [model@minimax.io](mailto:model@minimax.io)
- Submit an issue on our [GitHub](https://github.com/MiniMax-AI) repository
- Share feedback through our [official WeChat Work group](https://github.com/MiniMax-AI/MiniMax-AI.github.io/blob/main/images/wechat-qrcode.jpeg)

We continuously optimize the deployment experience for our models. Feedback is welcome!