For transformers versions v4.40.0 or newer, we suggest using OLMo 1B HF instead.
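With a recent transformers release, the HF-converted checkpoint loads directly through the standard Auto classes, without the ai2-olmo package. A minimal sketch (assuming transformers >= 4.40.0 and the allenai/OLMo-1B-hf repository):
from transformers import AutoModelForCausalLM, AutoTokenizer
# Assumes the HF-converted repo; no ai2-olmo / hf_olmo import is needed here
olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-1B-hf")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1B-hf")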
OLMo is a series of Open Language Models designed to enable the science of language models.
The OLMo models are trained on the Dolma dataset.
We release all code, checkpoints, logs (coming soon), and details involved in training these models.
Model Details
The core models released in this batch are the following:
We are releasing many checkpoints for these models, one for every 1000 training steps.
The naming convention is step1000-tokens4B.
In particular, we focus on four revisions of the 7B models:
All revisions/branches are listed in the file revisions.txt.
Or, you can access all the revisions for the models via the following code snippet:
from huggingface_hub import list_repo_refs
# List every branch (revision) available in the allenai/OLMo-1B repository
out = list_repo_refs("allenai/OLMo-1B")
branches = [b.name for b in out.branches]
A few revisions were lost due to an error, but the vast majority are present.
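To work with a specific intermediate checkpoint, pass its branch name as the revision; for example (the revision string below is illustrative, check revisions.txt or the snippet above for valid names):
from hf_olmo import OLMoForCausalLM
# Load an intermediate checkpoint by branch name (illustrative revision string)
olmo = OLMoForCausalLM.from_pretrained("allenai/OLMo-1B", revision="step1000-tokens4B")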
Model Description
Developed by:
Allen Institute for AI (AI2)
Supported by:
Databricks, Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, AMD, CSC (Lumi Supercomputer), UW
Model type:
a Transformer style autoregressive language model.
Language(s) (NLP):
English
License:
The code and model are released under Apache 2.0.
Contact:
Technical inquiries:
olmo at allenai dot org
. Press:
press at allenai dot org
Date cutoff:
Feb./March 2023 based on Dolma dataset version.
Quickly get inference running with the following required installation:
pip install ai2-olmo
Now, proceed as usual with HuggingFace:
from hf_olmo import OLMoForCausalLM, OLMoTokenizerFast
olmo = OLMoForCausalLM.from_pretrained("allenai/OLMo-1B")
tokenizer = OLMoTokenizerFast.from_pretrained("allenai/OLMo-1B")
message = ["Language modeling is"]
inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False)
# optional: move inputs and model to CUDA
# inputs = {k: v.to('cuda') for k, v in inputs.items()}
# olmo = olmo.to('cuda')
response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
>> 'Language modeling is the first step to build natural language generation...'
You can make inference slightly faster by quantizing the model, e.g.
AutoModelForCausalLM.from_pretrained("allenai/OLMo-1B", torch_dtype=torch.float16, load_in_8bit=True)
(requires bitsandbytes).
The quantized model is more sensitive to data types and CUDA placement, so it is recommended to pass the inputs as
inputs.input_ids.to('cuda')
to avoid potential issues.
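Putting the pieces together, a minimal sketch of 8-bit inference (this assumes ai2-olmo and bitsandbytes are installed, a CUDA device is available, and that importing hf_olmo registers the architecture with the Auto classes):
import torch
import hf_olmo  # assumption: importing registers OLMo with the Auto classes
from hf_olmo import OLMoTokenizerFast
from transformers import AutoModelForCausalLM
# 8-bit quantized load; requires bitsandbytes and a CUDA-capable GPU
olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-1B", torch_dtype=torch.float16, load_in_8bit=True)
tokenizer = OLMoTokenizerFast.from_pretrained("allenai/OLMo-1B")
inputs = tokenizer(["Language modeling is"], return_tensors="pt", return_token_type_ids=False)
# Pass only input_ids, moved to the GPU, as recommended above
response = olmo.generate(inputs.input_ids.to("cuda"), max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])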
Note: you may see the following error if
ai2-olmo
is not installed correctly; it is caused by an internal Python package-name check. We'll update the code soon to make this error clearer.
raise ImportError(
ImportError: This modeling file requires the following packages that were not found in your environment: hf_olmo. Run `pip install hf_olmo`
Fine-tuning
Model fine-tuning can be done from the final checkpoint (the
main
revision of this model) or many intermediate checkpoints. Two recipes for tuning are available.
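Neither official recipe is reproduced in this card. As a purely illustrative sketch (not one of the official recipes), the HF-converted checkpoint can also be fine-tuned with the standard Hugging Face Trainer; the repository name, toy dataset, and hyperparameters below are assumptions for demonstration only:
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# Illustrative only: a tiny toy corpus stands in for a real fine-tuning set
model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-1B-hf")  # assumed HF-converted repo
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1B-hf")
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

texts = ["Language modeling is a core task in NLP.", "OLMo is an open language model."]
dataset = Dataset.from_dict({"text": texts}).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="olmo-1b-finetuned",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=dataset,
    # mlm=False gives the causal language modeling objective
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()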
*Unlike OLMo, Pythia, and TinyLlama, StabilityAI has not yet disclosed the data StableLM was trained on, making comparisons with other efforts challenging.
Model Details
Data
For training data details, please see the
Dolma
documentation.
Architecture
OLMo 7B architecture with peer models for comparison.
OLMo 7B variants were either trained on MI250X GPUs at the LUMI supercomputer, or A100-40GB GPUs provided by MosaicML.
A summary of the environmental impact. Further details are available in the paper.
Like any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful and generally sensitive content.
Such content can also be produced unintentionally, especially in the case of bias, so we recommend users consider the risks of applications of this technology.
Additionally, many statements produced by OLMo, as with any LLM, are often inaccurate, so facts should be verified.
Citation
BibTeX:
@article{Groeneveld2023OLMo,
title={OLMo: Accelerating the Science of Language Models},
author={Groeneveld, Dirk and Beltagy, Iz and Walsh, Pete and Bhagia, Akshita and Kinney, Rodney and Tafjord, Oyvind and Jha, Ananya Harsh and Ivison, Hamish and Magnusson, Ian and Wang, Yizhong and Arora, Shane and Atkinson, David and Authur, Russell and Chandu, Khyathi and Cohan, Arman and Dumas, Jennifer and Elazar, Yanai and Gu, Yuling and Hessel, Jack and Khot, Tushar and Merrill, William and Morrison, Jacob and Muennighoff, Niklas and Naik, Aakanksha and Nam, Crystal and Peters, Matthew E. and Pyatkin, Valentina and Ravichander, Abhilasha and Schwenk, Dustin and Shah, Saurabh and Smith, Will and Subramani, Nishant and Wortsman, Mitchell and Dasigi, Pradeep and Lambert, Nathan and Richardson, Kyle and Dodge, Jesse and Lo, Kyle and Soldaini, Luca and Smith, Noah A. and Hajishirzi, Hannaneh},
journal={Preprint},
year={2024}
}
APA:
Groeneveld, D., Beltagy, I., Walsh, P., Bhagia, A., Kinney, R., Tafjord, O., Jha, A., Ivison, H., Magnusson, I., Wang, Y., Arora, S., Atkinson, D., Authur, R., Chandu, K., Cohan, A., Dumas, J., Elazar, Y., Gu, Y., Hessel, J., Khot, T., Merrill, W., Morrison, J., Muennighoff, N., Naik, A., Nam, C., Peters, M., Pyatkin, V., Ravichander, A., Schwenk, D., Shah, S., Smith, W., Subramani, N., Wortsman, M., Dasigi, P., Lambert, N., Richardson, K., Dodge, J., Lo, K., Soldaini, L., Smith, N., & Hajishirzi, H. (2024). OLMo: Accelerating the Science of Language Models. Preprint.
Model Card Contact
For errors in this model card, contact Nathan or Akshita,
{nathanl, akshitab} at allenai dot org
.