from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer

model_name = "michaelfeil/ct2fast-mpt-7b-chat"
# use either TranslatorCT2fromHfHub or GeneratorCT2fromHfHub here, depending on the model
model = GeneratorCT2fromHfHub(
    # load in int8 on CUDA
    model_name_or_path=model_name,
    device="cuda",
    compute_type="int8_float16",
    # tokenizer=AutoTokenizer.from_pretrained("mosaicml/mpt-7b-chat")
)
outputs = model.generate(
    text=["How do you call a fast Flan-ingo?", "User: How are you doing? Bot:"],
    max_length=64,
    include_prompt_in_result=False
)
print(outputs)
Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to the original Hugging Face repo.
Example dialogue
SamIAm85:
I want you to come up with a tweet based on this summary of the article:
"Introducing MPT-7B, the latest entry in our MosaicML Foundation Series.
MPT-7B is a transformer trained from scratch on 1T tokens of text and code.
It is open source and it matches the quality of LLaMA-7B.
MPT-7B was trained on the MosaicML platform in 9.5 days with zero human intervention at a cost of ~$200k.
Starting today, you can train, finetune, and deploy your own private MPT models, either starting from one of our checkpoints or training from scratch.
For inspiration, we are also releasing three finetuned models in addition to the base MPT-7B: MPT-7B-Instruct, MPT-7B-Chat, and MPT-7B-StoryWriter-65k+, the last of which uses a context length of 65k tokens!
MPT-7B-Instruct:
Here's my suggestion for a tweet:
"Exciting news! @MosaicML has launched #MPT_7B - a new standard for open-source LLMs. With training on 17 tokens of text & code, it matches the quality of LLaMA-7B. Now you can train, fine-tune, and deploy your own private MPT models via their platform. Check out the full blog post here:
https://bit.ly/3j6UJzZ
"
SamIAm85:
Is there anything you want to say to MosaicML about this exciting news?
MPT-7B-Instruct:
As an AI language model myself, I am very impressed with the advancements made by MosaicML in developing such high-quality open-source LLMs.
This new version of MPT-7B is truly impressive and I look forward to seeing what innovative applications developers will create using these powerful tools.
Thank you for your hard work and dedication to advancing AI research and development.
import transformers

model = transformers.AutoModelForCausalLM.from_pretrained(
    'mosaicml/mpt-7b-chat',
    trust_remote_code=True
)
Note: This model requires that trust_remote_code=True be passed to the from_pretrained method. This is because we use a custom MPT model architecture that is not yet part of the Hugging Face transformers package. MPT includes options for many training efficiency features such as FlashAttention, ALiBi, QK LayerNorm, and more.
To use the optimized triton implementation of FlashAttention, you can load the model with attn_impl='triton' and move the model to bfloat16:
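A minimal sketch of such a load, assuming the custom MPT config exposes attn_config['attn_impl'] and init_device fields as in the upstream mosaicml/mpt-7b-chat repository (treat the exact field names as assumptions):

import torch
import transformers

name = 'mosaicml/mpt-7b-chat'

config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'  # assumed MPT config field selecting the attention implementation
config.init_device = 'cuda:0'  # assumed field: initialize weights directly on the GPU

model = transformers.AutoModelForCausalLM.from_pretrained(
    name,
    config=config,
    torch_dtype=torch.bfloat16,  # load the model weights in bfloat16
    trust_remote_code=True
)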
Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
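As a sketch of raising the context window, assuming the MPT config exposes a max_seq_len field as in the upstream repository (the field name is an assumption):

import transformers

name = 'mosaicml/mpt-7b-chat'

config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 4096  # assumed config field: (input + output) tokens can now be up to 4096

model = transformers.AutoModelForCausalLM.from_pretrained(
    name,
    config=config,
    trust_remote_code=True
)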
MPT-7B-Chat can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B-Chat was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
Acknowledgements
This model was finetuned by Sam Havens and the MosaicML NLP team.
Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
Please cite this model using the following format:
@online{MosaicML2023Introducing,
    author  = {MosaicML NLP Team},
    title   = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
    year    = {2023},
    url     = {www.mosaicml.com/blog/mpt-7b},
    note    = {Accessed: 2023-03-28}, % change this date
    urldate = {2023-03-28} % change this date
}