Tags: Text Generation · Transformers · Safetensors · minimax_m2 · conversational · custom_code · fp8
zhaochenyang20 committed (verified) · Commit d1a4154 · Parent: 58b625a

Update README.md

Files changed (1): README.md (+4 −3)
README.md CHANGED

```diff
@@ -166,13 +166,14 @@ We look forward to your feedback and to collaborating with developers and researchers
 
 Download the model from HuggingFace repository: https://huggingface.co/MiniMaxAI/MiniMax-M2
 
+### SGLang
+
+We recommend using [SGLang](https://docs.sglang.ai/) to serve MiniMax-M2. SGLang provides solid day-0 support for MiniMax-M2 model. Please refer to our [SGLang Deployment Guide](https://huggingface.co/MiniMaxAI/MiniMax-M2/blob/main/docs/sglang_deploy_guide.md) for more details, and thanks so much for our collaboration with the SGLang team.
+
 ### vLLM
 
 We recommend using [vLLM](https://docs.vllm.ai/en/stable/) to serve MiniMax-M2. vLLM provides efficient day-0 support of MiniMax-M2 model, check https://docs.vllm.ai/projects/recipes/en/latest/MiniMax/MiniMax-M2.html for latest deployment guide. We also provide our [vLLM Deployment Guide](https://huggingface.co/MiniMaxAI/MiniMax-M2/blob/main/docs/vllm_deploy_guide.md).
 
-### SGLang
-We recommend using [SGLang](https://docs.sglang.ai/) to serve MiniMax-M2. SGLang provides solid day-0 support for MiniMax-M2 model. Please refer to our [SGLang Deployment Guide](https://huggingface.co/MiniMaxAI/MiniMax-M2/blob/main/docs/sglang_deploy_guide.md) for more details, and thanks so much for our collaboration with the SGLang team.
-
 ### Inference Parameters
 We recommend using the following parameters for best performance: `temperature=1.0`, `top_p = 0.95`, `top_k = 20`.
 
```
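The recommended sampling settings in the diff above (`temperature=1.0`, `top_p=0.95`, `top_k=20`) would typically be passed through the OpenAI-compatible endpoint that both vLLM and SGLang expose. A minimal sketch of such a request payload — note that `top_k` is not part of the standard OpenAI chat schema, so whether it is accepted as a top-level field is server-dependent; check the linked deployment guides:

```python
# Sketch: build a chat-completion payload carrying the recommended
# MiniMax-M2 sampling parameters. The model id is from the diff;
# top-level "top_k" passthrough is an assumption about the server.
import json


def build_request(prompt: str) -> dict:
    return {
        "model": "MiniMaxAI/MiniMax-M2",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 1.0,  # recommended
        "top_p": 0.95,       # recommended
        "top_k": 20,         # non-standard OpenAI field; vLLM/SGLang-style extension
    }


payload = build_request("Hello, MiniMax-M2")
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to the server's `/v1/chat/completions` route (or sent via an OpenAI-compatible client).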
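For context on the two serving sections the commit reorders, launching either stack usually comes down to a one-line server command. A rough sketch — the parallelism degree and exact flags are assumptions that vary by version and hardware; the linked vLLM and SGLang deployment guides are authoritative:

```shell
# vLLM (flags illustrative; see the vLLM deployment guide)
vllm serve MiniMaxAI/MiniMax-M2 \
    --trust-remote-code \
    --tensor-parallel-size 4

# SGLang (flags illustrative; see the SGLang deployment guide)
python -m sglang.launch_server \
    --model-path MiniMaxAI/MiniMax-M2 \
    --trust-remote-code \
    --tp 4
```

Both commands stand up an OpenAI-compatible HTTP endpoint, which is why the same sampling parameters apply to either backend.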