---
library_name: transformers
license: apache-2.0
tags:
- finetuned
- mistral-common
base_model: mistralai/Mistral-7B-v0.1
inference: false
widget:
- messages:
  - role: user
    content: What is your favorite condiment?
extra_gated_description: >-
  If you want to learn more about how we process your personal data, please read
  our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
---

# Model Card for Mistral-7B-Instruct-v0.1

## Encode and Decode with `mistral_common`

```py
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

mistral_models_path = "MISTRAL_MODELS_PATH"

tokenizer = MistralTokenizer.v1()

completion_request = ChatCompletionRequest(messages=[UserMessage(content="Explain Machine Learning to me in a nutshell.")])

tokens = tokenizer.encode_chat_completion(completion_request).tokens
```
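
The request tokens can be decoded back into text with the same tokenizer; a minimal round-trip sketch (special tokens such as the begin-of-sentence id are generally not rendered in the decoded string):

```py
# Decode the encoded request back into the rendered prompt text
print(tokenizer.decode(tokens))
```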

## Inference with `mistral_inference`

```py
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate

model = Transformer.from_folder(mistral_models_path)
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)

result = tokenizer.decode(out_tokens[0])

print(result)
```

## Inference with Hugging Face `transformers`

```py
import torch

from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
model.to("cuda")

# `tokens` is the plain Python list produced by mistral_common above;
# generate() expects a batched tensor of input ids
input_ids = torch.tensor([tokens], device="cuda")
generated_ids = model.generate(input_ids, max_new_tokens=1000, do_sample=True)

# decode with mistral tokenizer
result = tokenizer.decode(generated_ids[0].tolist())
print(result)
```

> [!TIP]
> PRs to correct the `transformers` tokenizer so that it produces exactly the same results as the `mistral_common` reference implementation are very welcome!

---

The Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) generative text model, trained on a variety of publicly available conversation datasets.

For full details of this model, please read our [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/announcing-mistral-7b/).

## Instruction format

To leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence id; subsequent instructions should not. The assistant generation is ended by the end-of-sentence token id.

E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```
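
Schematically, rendering a conversation into this format can be sketched as below; `build_prompt` is a hypothetical helper for illustration only, and exact whitespace should be checked against the tokenizer's chat template, which is the supported path shown next:

```python
def build_prompt(messages):
    """Render a list of {"role", "content"} dicts into the format above (illustrative sketch)."""
    prompt = "<s>"  # begin-of-sentence id only before the very first instruction
    for msg in messages:
        if msg["role"] == "user":
            prompt += f"[INST] {msg['content']} [/INST]"
        else:  # assistant turns are closed by the end-of-sentence token
            prompt += f"{msg['content']}</s> "
    return prompt
```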

This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
    {"role": "user", "content": "Do you have mayonnaise recipes?"}
]

encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
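
To inspect the string the template renders, rather than token ids, `apply_chat_template` can also be called with `tokenize=False`:

```python
# Render the conversation to a plain string instead of token ids
print(tokenizer.apply_chat_template(messages, tokenize=False))
```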

## Model Architecture
This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention (see the sketch after this list)
- Byte-fallback BPE tokenizer
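
As an illustration of the sliding-window idea (a sketch, not code from this repository; the window size of 4096 is the value reported in the paper): each position attends only to the `window` most recent positions, which bounds attention cost for long sequences.

```python
import torch

def sliding_window_causal_mask(seq_len: int, window: int = 4096) -> torch.Tensor:
    # True where query position i may attend to key position j:
    # causal (j <= i) and within the window (i - j < window)
    i = torch.arange(seq_len).unsqueeze(1)
    j = torch.arange(seq_len).unsqueeze(0)
    return (j <= i) & (i - j < window)
```

For example, `sliding_window_causal_mask(6, window=3)` lets position 5 attend only to positions 3, 4, and 5.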

## Troubleshooting
- If you see the following error:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/transformers/models/auto/auto_factory.py", line 482, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "/transformers/models/auto/configuration_auto.py", line 1022, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
  File "/transformers/models/auto/configuration_auto.py", line 723, in __getitem__
    raise KeyError(key)
KeyError: 'mistral'
```

Installing `transformers` from source should solve the issue:

```
pip install git+https://github.com/huggingface/transformers
```

This should not be required after transformers-v4.33.4.

## Limitations

The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.

## The Mistral AI Team

Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.