We introduce OLMo 2, a new family of 7B and 13B models trained on up to 5T tokens. These models are on par with or better than equivalently-sized fully-open models, and competitive with open-weight models from Meta and Mistral on English academic benchmarks.
OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models.
These models are trained on the Dolma dataset. We are releasing all code, checkpoints, logs (coming soon), and associated training details.
The core models released in this batch include the following:
- OLMo 2 7B
- OLMo 2 13B (this model)
The quantized model is more sensitive to data types and CUDA operations. To avoid potential issues, it's recommended to pass the input IDs directly to CUDA using `inputs.input_ids.to('cuda')`.
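For context, here is a minimal end-to-end generation sketch with the Hugging Face `transformers` API; the prompt and generation settings are illustrative, not recommendations:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-1124-13B")
model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-13B").to("cuda")

inputs = tokenizer("Language modeling is ", return_tensors="pt")
# Move only the input IDs to CUDA, per the note above, rather than the
# whole BatchEncoding.
input_ids = inputs.input_ids.to("cuda")

output = model.generate(input_ids, max_new_tokens=64, do_sample=True, top_p=0.95)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```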
We have released checkpoints for these models. For pretraining, the naming convention is `stepXXX-tokensYYYB`. For checkpoints with ingredients of the soup, the naming convention is `stage2-ingredientN-stepXXX-tokensYYYB`.
To load a specific model revision with Hugging Face, simply add the `revision` argument:
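A minimal sketch; the revision string below is a placeholder following the naming convention above, not a verified branch name:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-2-1124-13B",
    revision="stepXXX-tokensYYYB",  # placeholder; pick a real branch from the repo
)
```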
Or, you can access all the revisions for the models via the following code snippet:

```python
from huggingface_hub import list_repo_refs

# Enumerate every branch (checkpoint revision) in the model repository
out = list_repo_refs("allenai/OLMo-2-1124-13B")
branches = [b.name for b in out.branches]
```
Fine-tuning
Model fine-tuning can be done from the final checkpoint (the `main` revision of this model) or from many intermediate checkpoints. Two recipes for tuning are available (a rough sketch is shown below).
- Mix composition: 50% high-quality data + academic/Q&A/instruction/math content
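A minimal sketch of starting a fine-tune from an intermediate checkpoint, assuming a plain-text corpus and default hyperparameters; the dataset file and revision string are placeholders, and this is not one of the two released recipes:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

REPO = "allenai/OLMo-2-1124-13B"

tokenizer = AutoTokenizer.from_pretrained(REPO)
# Start from an intermediate checkpoint instead of `main`; the revision
# string is a placeholder following the naming convention above.
model = AutoModelForCausalLM.from_pretrained(REPO, revision="stepXXX-tokensYYYB")

# Toy corpus for illustration only
dataset = load_dataset("text", data_files={"train": "train.txt"})["train"]
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=1024),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="olmo2-13b-ft", per_device_train_batch_size=1),
    train_dataset=tokenized,
    # Causal-LM collator copies input IDs into labels for next-token loss
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```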
Model Merging
- 7B Model: 3 versions trained on the 50B mix, merged via model souping (see the sketch below)
- 13B Model: 3 versions trained on the 100B mix plus 1 version trained on the 300B mix, merged for the final checkpoint
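A minimal sketch of uniform model souping (element-wise parameter averaging across equally-shaped checkpoints), assuming the ingredients are published as repo branches; the revision names are placeholders, and this is not the exact script used to produce the release:

```python
import torch
from transformers import AutoModelForCausalLM

REPO = "allenai/OLMo-2-1124-13B"
# Placeholder ingredient branches following the stage2-ingredientN-... convention
INGREDIENTS = [
    "stage2-ingredient1-stepXXX-tokensYYYB",
    "stage2-ingredient2-stepXXX-tokensYYYB",
    "stage2-ingredient3-stepXXX-tokensYYYB",
]

# Keep a running average so only two full copies of the weights are
# resident at once: avg_i = avg_{i-1} * (i-1)/i + new / i
soup = AutoModelForCausalLM.from_pretrained(REPO, revision=INGREDIENTS[0])
with torch.no_grad():
    for i, rev in enumerate(INGREDIENTS[1:], start=2):
        ingredient = AutoModelForCausalLM.from_pretrained(REPO, revision=rev)
        for p_avg, p_new in zip(soup.parameters(), ingredient.parameters()):
            p_avg.mul_((i - 1) / i).add_(p_new, alpha=1.0 / i)
        del ingredient

soup.save_pretrained("olmo2-13b-souped")
```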
Bias, Risks, and Limitations
Like any base or fine-tuned language model without safety filtering, these models can easily be prompted to generate harmful or sensitive content. Such content may also be produced unintentionally, especially in cases involving bias, so we recommend that users consider the risks when applying this technology. Additionally, statements from OLMo, as from any LLM, are often inaccurate, so facts should be verified.
License and use
OLMo 2 is licensed under the Apache 2.0 license.
OLMo 2 is intended for research and educational use.
For more information, please see our Responsible Use Guidelines.
Citation
A technical manuscript is forthcoming!
Model Card Contact
For errors in this model card, contact olmo@allenai.org.