MaziyarPanahi / Mistral-7B-Instruct-v0.3

huggingface.co
Task: text-generation
Model last updated: May 31, 2024

Total runs: 7.3K
24-hour runs: 103
3-day runs: 440
7-day runs: -30
30-day runs: 1.7K

Model Card for Mistral-7B-Instruct-v0.3

The Mistral-7B-Instruct-v0.3 Large Language Model (LLM) is an instruct fine-tuned version of Mistral-7B-v0.3.

Mistral-7B-v0.3 has the following changes compared to Mistral-7B-v0.2:

  • Extended vocabulary to 32768
  • Supports v3 Tokenizer
  • Supports function calling
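
As a quick sanity check of the first two points, the extended vocabulary and the v3 tokenizer can be inspected with mistral_common (installed alongside mistral_inference). This is a minimal sketch, not part of the upstream card; it assumes mistral_common ships the bundled v3 tokenizer.

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer

# Load the bundled v3 tokenizer (same format as the tokenizer.model.v3 file downloaded below).
tokenizer = MistralTokenizer.v3()

# The v3 vocabulary is extended to 32768 entries.
print(tokenizer.instruct_tokenizer.tokenizer.n_words)  # expected: 32768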
Installation

It is recommended to use mistralai/Mistral-7B-Instruct-v0.3 with mistral-inference. For Hugging Face transformers code snippets, see the "Generate with transformers" section below.

pip install mistral_inference
Download
from huggingface_hub import snapshot_download
from pathlib import Path

mistral_models_path = Path.home().joinpath('mistral_models', '7B-Instruct-v0.3')
mistral_models_path.mkdir(parents=True, exist_ok=True)

snapshot_download(
    repo_id="mistralai/Mistral-7B-Instruct-v0.3",
    allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"],
    local_dir=mistral_models_path,
)
Chat

After installing mistral_inference, a mistral-chat CLI command should be available in your environment. You can chat with the model using:

mistral-chat $HOME/mistral_models/7B-Instruct-v0.3 --instruct --max_tokens 256
Instruct following
from mistral_inference.model import Transformer
from mistral_inference.generate import generate

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest


# Load the v3 tokenizer and the weights downloaded above.
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)

completion_request = ChatCompletionRequest(messages=[UserMessage(content="Explain Machine Learning to me in a nutshell.")])

# Encode the chat request into tokens, then generate greedily (temperature=0.0).
tokens = tokenizer.encode_chat_completion(completion_request).tokens

out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])

print(result)
Function calling
from mistral_common.protocol.instruct.tool_calls import Function, Tool
from mistral_inference.model import Transformer
from mistral_inference.generate import generate

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest


tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)

completion_request = ChatCompletionRequest(
    tools=[
        Tool(
            function=Function(
                name="get_current_weather",
                description="Get the current weather",
                parameters={
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        },
                        "format": {
                            "type": "string",
                            "enum": ["celsius", "fahrenheit"],
                            "description": "The temperature unit to use. Infer this from the users location.",
                        },
                    },
                    "required": ["location", "format"],
                },
            )
        )
    ],
    messages=[
        UserMessage(content="What's the weather like today in Paris?"),
        ],
)

tokens = tokenizer.encode_chat_completion(completion_request).tokens

out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])

print(result)
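
When the model decides to call the tool, the decoded result is expected to contain a JSON list of {"name": ..., "arguments": ...} objects. The following post-processing sketch is not part of the upstream card; it assumes the model actually emitted a tool call in that format and shows one way to extract and dispatch it.

import json

# Hypothetical post-processing: pull the JSON tool-call payload out of the decoded text.
start, end = result.find("["), result.rfind("]") + 1
tool_calls = json.loads(result[start:end])

for call in tool_calls:
    # e.g. name == "get_current_weather", arguments == {"location": "Paris, FR", "format": "celsius"}
    print(call["name"], call["arguments"])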
Generate with transformers

If you want to use Hugging Face transformers to generate text, you can do something like this.

from transformers import pipeline

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]
chatbot = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.3")
chatbot(messages)
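
For more control over loading and generation than the pipeline helper offers, you can also use AutoTokenizer and AutoModelForCausalLM directly. This is a minimal sketch, not from the original card; it assumes a GPU with enough memory for the ~7B weights in bfloat16 and relies on the tokenizer's built-in chat template.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Who are you?"}]

# Apply the chat template and generate a reply.
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))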
Limitations

The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.

The Mistral AI Team

Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard. The Avg. value is the mean of the six benchmark scores below.

Metric                               Value
Avg.                                 65.21
AI2 Reasoning Challenge (25-Shot)    63.91
HellaSwag (10-Shot)                  84.82
MMLU (5-Shot)                        62.58
TruthfulQA (0-shot)                  59.45
Winogrande (5-shot)                  78.37
GSM8k (5-shot)                       42.15

More Information About Mistral-7B-Instruct-v0.3 on huggingface.co

License: Apache 2.0 (https://choosealicense.com/licenses/apache-2.0)

Model page: https://huggingface.co/MaziyarPanahi/Mistral-7B-Instruct-v0.3

Provider: MaziyarPanahi

Mistral-7B-Instruct-v0.3 is hosted on huggingface.co, where it can be tried online for free and called through an API (for example from Node.js, Python, or plain HTTP). To install and run the model locally, follow the Installation and Download instructions above.
