Tulu is a series of language models that are trained to act as helpful assistants.
Tulu 2 7B is a fine-tuned version of Llama 2 that was trained on a mix of publicly available, synthetic, and human-created datasets.
Model type:
A model belonging to a suite of instruction- and RLHF-tuned chat models trained on a mix of publicly available, synthetic, and human-created datasets.
Model Family:
Other model checkpoints and the dataset can be found in the Tulu V2 collection.
Performance
| Model | Size | Alignment | MT-Bench (score) | AlpacaEval (win rate %) |
|---|---|---|---|---|
| Tulu-v2-7b 🐪 | 7B | SFT | 6.30 | 73.9 |
| Tulu-v2-dpo-7b 🐪 | 7B | DPO | 6.29 | 85.1 |
| Tulu-v2-13b 🐪 | 13B | SFT | 6.70 | 78.9 |
| Tulu-v2-dpo-13b 🐪 | 13B | DPO | 7.00 | 89.5 |
| Tulu-v2-70b 🐪 | 70B | SFT | 7.49 | 86.6 |
| Tulu-v2-dpo-70b 🐪 | 70B | DPO | 7.89 | 95.1 |
Input Format
The model is trained to use the following format (note the newlines):
<|user|>
Your message here!
<|assistant|>
For best results, format all inputs in this manner. Make sure to include a newline after <|assistant|>, as this can affect generation quality quite a bit.
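As a concrete illustration, here is a minimal sketch of prompting the model in this format with the Hugging Face transformers library; the generation settings are illustrative, not recommendations from this card:

```python
# A minimal sketch, assuming transformers and the allenai/tulu-2-7b checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/tulu-2-7b")
model = AutoModelForCausalLM.from_pretrained(
    "allenai/tulu-2-7b", torch_dtype=torch.bfloat16, device_map="auto"
)

# Note the trailing newline after <|assistant|>; omitting it can hurt quality.
prompt = "<|user|>\nHow do I boil an egg?\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Strip the prompt tokens and print only the assistant's reply.
reply = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(reply, skip_special_tokens=True))
```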
Intended uses & limitations
The model was fine-tuned on a filtered and preprocessed version of the Tulu V2 mix dataset, which contains a diverse range of human-created instructions and synthetic dialogues generated primarily by other LLMs.
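For reference, a minimal sketch of inspecting the mixture with the datasets library; the dataset id allenai/tulu-v2-sft-mixture matches the public release, and the "messages" column name is an assumption:

```python
# A minimal sketch, assuming the datasets library and that the public
# Tulu V2 mixture is published as allenai/tulu-v2-sft-mixture with a
# "messages" column of {role, content} chat turns (an assumption).
from datasets import load_dataset

ds = load_dataset("allenai/tulu-v2-sft-mixture", split="train")
print(ds[0]["messages"])  # one multi-turn conversation per example
```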
Bias, Risks, and Limitations
The Tulu models have not been aligned to generate safe completions within the RLHF phase or deployed with in-the-loop filtering of responses the way ChatGPT is, so the model can produce problematic outputs (especially when prompted to do so).
The size and composition of the corpus used to train the base Llama 2 models are also unknown, but they likely included a mix of web data and technical sources such as books and code. See the Falcon 180B model card for an example of this.
Training hyperparameters
The following hyperparameters were used during fine-tuning:
learning_rate: 2e-5
total_train_batch_size: 128
optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
lr_scheduler_type: linear
lr_scheduler_warmup_ratio: 0.03
num_epochs: 2.0
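As a sketch only, these settings map onto transformers' TrainingArguments roughly as follows; the factoring of the total batch size of 128 into per-device batch size, device count, and gradient accumulation steps is an assumption, and the output path is hypothetical:

```python
# A minimal sketch, assuming transformers' TrainingArguments. The factoring of
# total_train_batch_size=128 into per-device batch x GPUs x accumulation steps
# is an assumption; the actual hardware layout is not stated in this card.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="tulu-2-7b-sft",        # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=2,     # 2 per device x 8 GPUs x 8 accum = 128
    gradient_accumulation_steps=8,
    adam_beta1=0.9,                    # Adam betas and epsilon from the card
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.03,
    num_train_epochs=2.0,
)
```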
Citation
If you find Tulu 2 useful in your work, please cite it with:
@misc{ivison2023camels,
  title={Camels in a Changing Climate: Enhancing LM Adaptation with Tulu 2},
  author={Hamish Ivison and Yizhong Wang and Valentina Pyatkin and Nathan Lambert and Matthew Peters and Pradeep Dasigi and Joel Jang and David Wadden and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
  year={2023},
  eprint={2311.10702},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}