from mistral_inference.model import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)
completion_request = ChatCompletionRequest(messages=[UserMessage(content="Explain Machine Learning to me in a nutshell.")])
tokens = tokenizer.encode_chat_completion(completion_request).tokens
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])
print(result)
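The snippet above assumes mistral_models_path points to a local directory that already contains the weights and the v3 tokenizer. As a minimal sketch (directory name and file list chosen here for illustration), the files could be fetched with huggingface_hub like this:
from huggingface_hub import snapshot_download
from pathlib import Path
# Example location; any writable directory works.
mistral_models_path = Path.home().joinpath("mistral_models", "7B-Instruct-v0.3")
mistral_models_path.mkdir(parents=True, exist_ok=True)
# Fetch only the files mistral_inference needs: params, weights, and the v3 tokenizer.
snapshot_download(
    repo_id="mistralai/Mistral-7B-Instruct-v0.3",
    allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"],
    local_dir=mistral_models_path,
)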
Function calling
from mistral_common.protocol.instruct.tool_calls import Function, Tool
from mistral_inference.model import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)
completion_request = ChatCompletionRequest(
    tools=[
        Tool(
            function=Function(
                name="get_current_weather",
                description="Get the current weather",
                parameters={
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        },
                        "format": {
                            "type": "string",
                            "enum": ["celsius", "fahrenheit"],
                            "description": "The temperature unit to use. Infer this from the users location.",
                        },
                    },
                    "required": ["location", "format"],
                },
            )
        )
    ],
    messages=[
        UserMessage(content="What's the weather like today in Paris?"),
    ],
)
tokens = tokenizer.encode_chat_completion(completion_request).tokens
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])
print(result)
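With tools supplied, the model usually replies with a tool call rather than prose. A minimal sketch of post-processing that reply, assuming the decoded text is a JSON list of calls (possibly prefixed by a [TOOL_CALLS] marker, depending on how special tokens are decoded):
import json
# Strip the control marker if present, then parse the JSON list of calls,
# e.g. [{"name": "get_current_weather", "arguments": {...}}].
raw = result.replace("[TOOL_CALLS]", "").strip()
try:
    for call in json.loads(raw):
        print(call["name"], call["arguments"])
except json.JSONDecodeError:
    # The model answered in plain text instead of calling a tool.
    print(raw)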
Generate with transformers
If you want to use Hugging Face transformers to generate text, you can do something like this.
from transformers import pipeline
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
chatbot = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.3")
chatbot(messages)
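With chat-style input, recent transformers versions return the updated message list under generated_text, so the assistant reply can be pulled out like this (max_new_tokens chosen arbitrarily here):
response = chatbot(messages, max_new_tokens=128)
# generated_text holds the full conversation; the last entry is the new assistant turn.
print(response[0]["generated_text"][-1]["content"])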
Function calling with transformers
To use this example, you'll need transformers version 4.42.0 or higher. Please see the function calling guide in the transformers docs for more information.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_id = "mistralai/Mistral-7B-Instruct-v0.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
def get_current_weather(location: str, format: str):
    """
    Get the current weather

    Args:
        location: The city and state, e.g. San Francisco, CA
        format: The temperature unit to use. Infer this from the users location. (choices: ["celsius", "fahrenheit"])
    """
    pass
conversation = [{"role": "user", "content": "What's the weather like in Paris?"}]
tools = [get_current_weather]
# render the tool use prompt as a string:
tool_use_prompt = tokenizer.apply_chat_template(
conversation,
tools=tools,
tokenize=False,
add_generation_prompt=True,
)
inputs = tokenizer(tool_use_prompt, return_tensors="pt")
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
inputs = inputs.to(model.device)  # move the input tensors to the same device as the model
outputs = model.generate(**inputs, max_new_tokens=1000)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
Note that, for reasons of space, this example does not show a complete cycle of calling a tool and adding the tool call and tool results to the chat history so that the model can use them in its next generation. For a full tool calling example, please see the function calling guide, and note that Mistral does use tool call IDs, so these must be included in your tool calls and tool results. They should be exactly 9 alphanumeric characters.
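As a rough sketch of what that next step could look like with this model (the generated ID and the returned temperature below are made up for illustration; see the transformers function calling guide for the authoritative flow):
import random, string
# Pretend the model asked for get_current_weather(location="Paris, France", format="celsius").
tool_call = {"name": "get_current_weather", "arguments": {"location": "Paris, France", "format": "celsius"}}
# Mistral expects every tool call and tool result to carry a 9-character alphanumeric ID.
call_id = "".join(random.choices(string.ascii_letters + string.digits, k=9))
conversation.append({"role": "assistant", "tool_calls": [{"type": "function", "id": call_id, "function": tool_call}]})
# Append the (hypothetical) result of actually running the tool, tagged with the same ID.
conversation.append({"role": "tool", "tool_call_id": call_id, "name": "get_current_weather", "content": "22.0"})
# Re-render the prompt and generate again so the model can use the tool result.
tool_result_prompt = tokenizer.apply_chat_template(conversation, tools=tools, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(tool_result_prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1000)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))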
Limitations
The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall