from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-RedPajama-INCITE-Chat-7B-v0.1"
# use either TranslatorCT2fromHfHub or GeneratorCT2fromHfHub here, depending on the model.
model = GeneratorCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-7B-v0.1")
)
outputs = model.generate(
text=["How do you call a fast Flan-ingo?", "User: How are you doing? Bot:"],
)
print(outputs)
Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to those of the original Hugging Face repo.
Original description
tags:
ctranslate2
int8
float16
RedPajama-INCITE-Chat-7B-v0.1
RedPajama-INCITE-Chat-7B-v0.1 was developed by Together and leaders from the open-source AI community including Ontocord.ai, ETH DS3Lab, AAI CERC, Université de Montréal, MILA - Québec AI Institute, Stanford Center for Research on Foundation Models (CRFM), Stanford Hazy Research research group and LAION.
It is fine-tuned on OASST1 and Dolly 2.0 to enhance its chatting ability.
Model Description
A 6.9B parameter pretrained language model.
Quick Start
Please note that the model requires transformers version >= 4.25.1.
To prompt the chat model, use the following format:
<human>: [Instruction]
<bot>:
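For illustration, the template above can be filled in with a small helper (a hypothetical function name, not part of the repo):

```python
def build_prompt(instruction: str) -> str:
    # Wrap a user instruction in the <human>/<bot> chat template.
    return f"<human>: {instruction}\n<bot>:"

prompt = build_prompt("Who is Alan Turing?")
print(prompt)
```

The model then continues the text after `<bot>:`.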
GPU Inference
This requires a GPU with 16GB memory.
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'

# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'

# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-7B-v0.1")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-7B-v0.1", torch_dtype=torch.float16)
model = model.to('cuda:0')
# infer
prompt = "<human>: Who is Alan Turing?\n<bot>:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""Alan Mathison Turing (23 June 1912 7 June 1954) was an English computer scientist, mathematician, logician, cryptanalyst, philosopher, mathematician, and theoretical biologist."""
GPU Inference in Int8
This requires a GPU with 12GB memory.
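These memory figures follow roughly from parameter count times bytes per weight; this rough estimate covers weights only, and activations and the KV cache add overhead on top:

```python
params = 6.9e9  # 6.9B parameters

fp16_gib = params * 2 / 1024**3  # 2 bytes per weight -> ~12.9 GiB
int8_gib = params * 1 / 1024**3  # 1 byte per weight  -> ~6.4 GiB

print(f"fp16 weights: {fp16_gib:.1f} GiB, int8 weights: {int8_gib:.1f} GiB")
```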
To run inference with int8, please ensure you have installed accelerate and bitsandbytes. You can install them with the following command:
pip install accelerate
pip install bitsandbytes
Then you can run inference with int8 as follows:
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'

# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'

# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-7B-v0.1")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-7B-v0.1", device_map='auto', torch_dtype=torch.float16, load_in_8bit=True)
# infer
prompt = "<human>: Who is Alan Turing?\n<bot>:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""Alan Mathison Turing (23 June 1912 – 7 June 1954) was an English computer scientist, mathematician, logician, cryptanalyst, philosopher, and theoretical biologist."""
CPU Inference
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'

# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'

# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-7B-v0.1")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-7B-v0.1", torch_dtype=torch.bfloat16)
# infer
prompt = "<human>: Who is Alan Turing?\n<bot>:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""Alan Mathison Turing, OBE, FRS, (23 June 1912 – 7 June 1954) was an English computer scientist, mathematician, logician, cryptanalyst, philosopher, and theoretical biologist."""
Please note that since LayerNormKernelImpl is not implemented in fp16 for CPU, we use bfloat16 for CPU inference.
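bfloat16 is a reasonable substitute here because, while both formats use 16 bits, bfloat16 keeps float32's 8-bit exponent and so covers a far larger range (at lower precision). A quick sketch of the maximum finite values of each format:

```python
# Max finite value = (2 - 2**-mantissa_bits) * 2**max_exponent
fp16_max = (2 - 2**-10) * 2.0**15    # float16: 10 mantissa bits -> 65504.0
bf16_max = (2 - 2**-7) * 2.0**127    # bfloat16: 7 mantissa bits -> ~3.4e38
```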
Uses
Direct Use
Excluded uses are described below.
Misuse, Malicious Use, and Out-of-Scope Use
It is the responsibility of the end user to ensure that the model is used in a responsible and ethical manner.
Out-of-Scope Use
RedPajama-INCITE-Chat-7B-v0.1
is a language model and may not perform well for other use cases outside of its intended scope.
For example, it may not be suitable for use in safety-critical applications or for making decisions that have a significant impact on individuals or society.
It is important to consider the limitations of the model and to only use it for its intended purpose.
Misuse and Malicious Use
RedPajama-INCITE-Chat-7B-v0.1
is designed for language modeling.
Misuse of the model, such as using it to engage in illegal or unethical activities, is strictly prohibited and goes against the principles of the project.
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
Generating fake news, misinformation, or propaganda
Promoting hate speech, discrimination, or violence against individuals or groups
Impersonating individuals or organizations without their consent
Engaging in cyberbullying or harassment
Defamatory content
Spamming or scamming
Sharing confidential or sensitive information without proper authorization
Violating the terms of use of the model or the data used to train it
Creating automated bots for malicious purposes such as spreading malware, phishing scams, or spamming
Limitations
RedPajama-INCITE-Chat-7B-v0.1
, like other language models, has limitations that should be taken into consideration.
For example, the model may not always provide accurate or relevant answers, particularly for questions that are complex, ambiguous, or outside of its training data.
We therefore welcome contributions from individuals and organizations, and encourage collaboration towards creating a more robust and inclusive chatbot.