allenai / OLMo-2-1124-13B

huggingface.co
Total runs: 12.4K
24-hour runs: 0
7-day runs: -6
30-day runs: 8.0K
Model's Last Updated: January 06, 2025

Introduction of OLMo-2-1124-13B


Model Card for OLMo 2 13B

We introduce OLMo 2, a new family of 7B and 13B models trained on up to 5T tokens. These models are on par with or better than equivalently-sized fully-open models, and competitive with open-weight models from Meta and Mistral on English academic benchmarks.

OLMo is a series of Open Language Models designed to enable the science of language models. These models are trained on the Dolma dataset. We are releasing all code, checkpoints, logs (coming soon), and associated training details. The core models released in this batch include the following:

| Size | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length |
|------|-----------------|--------|-------------|-----------------|----------------|
| OLMo 2 7B | 4 Trillion | 32 | 4096 | 32 | 4096 |
| OLMo 2 13B | 5 Trillion | 40 | 5120 | 40 | 4096 |
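These dimensions can also be read directly from the model config without downloading any weights. A minimal sketch (attribute names are the standard Hugging Face config fields; your transformers install must already recognize the olmo2 architecture):

from transformers import AutoConfig

config = AutoConfig.from_pretrained("allenai/OLMo-2-1124-13B")
print(config.num_hidden_layers)        # layers, expected 40 for the 13B model
print(config.hidden_size)              # hidden size, expected 5120
print(config.num_attention_heads)      # attention heads, expected 40
print(config.max_position_embeddings)  # context length, expected 4096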


Installation

OLMo 2 will be supported in the next version of Transformers; until then, install it from the main branch using:

pip install --upgrade git+https://github.com/huggingface/transformers.git
Inference

You can use OLMo with the standard HuggingFace transformers library:

from transformers import AutoModelForCausalLM, AutoTokenizer

olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-13B")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-1124-13B")
message = ["Language modeling is "]
inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False)
# optional: move the model and inputs to the GPU
# inputs = {k: v.to('cuda') for k, v in inputs.items()}
# olmo = olmo.to('cuda')
response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
>> 'Language modeling is  a key component of any text-based application, but its effectiveness...'
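The same generation can also be run through the high-level pipeline API. A minimal sketch, assuming a GPU, the accelerate package, and a transformers build that recognizes OLMo 2 (bfloat16 is an assumption; use float16 on older GPUs):

import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="allenai/OLMo-2-1124-13B",
    torch_dtype=torch.bfloat16,  # assumption: GPU with bf16 support
    device_map="auto",
)
out = pipe("Language modeling is ", max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(out[0]["generated_text"])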

For faster performance, you can quantize the model using the following method:

AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-13B", 
    torch_dtype=torch.float16, 
    load_in_8bit=True)  # Requires bitsandbytes

The quantized model is more sensitive to data types and CUDA operations. To avoid potential issues, it's recommended to pass the inputs directly to CUDA using:

inputs.input_ids.to('cuda')
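Putting these pieces together, a minimal end-to-end sketch of 8-bit inference (assumes a CUDA GPU with sufficient memory and the bitsandbytes package installed):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-1124-13B")
olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-13B",
    torch_dtype=torch.float16,
    load_in_8bit=True)  # requires bitsandbytes; weights are placed on the GPU automatically

inputs = tokenizer(["Language modeling is "], return_tensors='pt', return_token_type_ids=False)
input_ids = inputs.input_ids.to('cuda')  # pass the inputs directly to CUDA, as recommended above
response = olmo.generate(input_ids=input_ids, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])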

We have released checkpoints for these models. For pretraining, the naming convention is stepXXX-tokensYYYB. For checkpoints that are ingredients of the model soup, the naming convention is stage2-ingredientN-stepXXX-tokensYYYB.

To load a specific model revision with HuggingFace, simply add the revision argument:

olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-13B", revision="step102500-tokens860B")

Or, you can access all the revisions for the models via the following code snippet:

from huggingface_hub import list_repo_refs
out = list_repo_refs("allenai/OLMo-2-1124-13B")
branches = [b.name for b in out.branches]
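Because the branch names follow the stepXXX-tokensYYYB convention described above, you can, for example, pick out the pretraining checkpoints and sort them by token count. This is an illustrative sketch, not an official utility:

import re
from huggingface_hub import list_repo_refs

out = list_repo_refs("allenai/OLMo-2-1124-13B")
pattern = re.compile(r"^step(\d+)-tokens(\d+)B$")  # pretraining checkpoint naming convention

checkpoints = []
for branch in out.branches:
    match = pattern.match(branch.name)
    if match:
        checkpoints.append((int(match.group(2)), branch.name))  # (billions of tokens, branch name)

for tokens_b, name in sorted(checkpoints):
    print(f"{name}: {tokens_b}B tokens")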
Fine-tuning

Model fine-tuning can be done from the final checkpoint (the main revision of this model) or many intermediate checkpoints. Two recipes for tuning are available.

  1. Fine-tune with the OLMo repository:
torchrun --nproc_per_node=8 scripts/train.py {path_to_train_config} \
    --data.paths=[{path_to_data}/input_ids.npy] \
    --data.label_mask_paths=[{path_to_data}/label_mask.npy] \
    --load_path={path_to_checkpoint} \
    --reset_trainer_state

For more documentation, see the GitHub readme. (A rough, illustrative sketch of preparing the .npy data files follows this list.)

  2. Further fine-tuning support is being developed in AI2's Open Instruct repository. Details are here.
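The exact on-disk format the OLMo trainer expects for input_ids.npy and label_mask.npy is documented in the GitHub readme. Purely to illustrate the idea (token IDs plus a boolean mask marking which positions contribute to the loss), a data-preparation sketch might look like the following; the dtypes, shapes, and padding scheme here are assumptions, so check them against the OLMo repository before training:

import numpy as np
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-1124-13B")

examples = ["Language modeling is fun.", "OLMo 2 is a fully open model."]
sequence_length = 4096  # matches the model's context length

pad_id = tokenizer.pad_token_id if tokenizer.pad_token_id is not None else 0
input_ids = np.full((len(examples), sequence_length), pad_id, dtype=np.int32)  # dtype is an assumption
label_mask = np.zeros((len(examples), sequence_length), dtype=bool)            # True = position counts toward the loss

for i, text in enumerate(examples):
    ids = tokenizer(text)["input_ids"][:sequence_length]
    input_ids[i, : len(ids)] = ids
    label_mask[i, : len(ids)] = True

np.save("input_ids.npy", input_ids)
np.save("label_mask.npy", label_mask)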
Model Description
  • Developed by: Allen Institute for AI (Ai2)
  • Model type: a Transformer-style autoregressive language model.
  • Language(s) (NLP): English
  • License: The code and model are released under Apache 2.0.
  • Contact: Technical inquiries: olmo@allenai.org. Press: press@allenai.org
  • Date cutoff: Dec. 2023.
Model Sources
Evaluation

Core model results for OLMo 2 7B and 13B models are found below.

| Model | Train FLOPs | Average | ARC/C | HSwag | WinoG | MMLU | DROP | NQ | AGIEval | GSM8k | MMLUPro | TriviaQA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Open weights models: | | | | | | | | | | | | |
| Llama-2-13B | 1.6·10²³ | 54.1 | 67.3 | 83.9 | 74.9 | 55.7 | 45.6 | 38.4 | 41.5 | 28.1 | 23.9 | 81.3 |
| Mistral-7B-v0.3 | n/a | 58.8 | 78.3 | 83.1 | 77.7 | 63.5 | 51.8 | 37.2 | 47.3 | 40.1 | 30 | 79.3 |
| Llama-3.1-8B | 7.2·10²³ | 61.8 | 79.5 | 81.6 | 76.6 | 66.9 | 56.4 | 33.9 | 51.3 | 56.5 | 34.7 | 80.3 |
| Mistral-Nemo-12B | n/a | 66.9 | 85.2 | 85.6 | 81.5 | 69.5 | 69.2 | 39.7 | 54.7 | 62.1 | 36.7 | 84.6 |
| Qwen-2.5-7B | 8.2·10²³ | 67.4 | 89.5 | 89.7 | 74.2 | 74.4 | 55.8 | 29.9 | 63.7 | 81.5 | 45.8 | 69.4 |
| Gemma-2-9B | 4.4·10²³ | 67.8 | 89.5 | 87.3 | 78.8 | 70.6 | 63 | 38 | 57.3 | 70.1 | 42 | 81.8 |
| Qwen-2.5-14B | 16.0·10²³ | 72.2 | 94 | 94 | 80 | 79.3 | 51.5 | 37.3 | 71 | 83.4 | 52.8 | 79.1 |
| Partially open models: | | | | | | | | | | | | |
| StableLM-2-12B | 2.9·10²³ | 62.2 | 81.9 | 84.5 | 77.7 | 62.4 | 55.5 | 37.6 | 50.9 | 62 | 29.3 | 79.9 |
| Zamba-2-7B | n/c | 65.2 | 92.2 | 89.4 | 79.6 | 68.5 | 51.7 | 36.5 | 55.5 | 67.2 | 32.8 | 78.8 |
| Fully open models: | | | | | | | | | | | | |
| Amber-7B | 0.5·10²³ | 35.2 | 44.9 | 74.5 | 65.5 | 24.7 | 26.1 | 18.7 | 21.8 | 4.8 | 11.7 | 59.3 |
| OLMo-7B | 1.0·10²³ | 38.3 | 46.4 | 78.1 | 68.5 | 28.3 | 27.3 | 24.8 | 23.7 | 9.2 | 12.1 | 64.1 |
| MAP-Neo-7B | 2.1·10²³ | 49.6 | 78.4 | 72.8 | 69.2 | 58 | 39.4 | 28.9 | 45.8 | 12.5 | 25.9 | 65.1 |
| OLMo-0424-7B | 0.9·10²³ | 50.7 | 66.9 | 80.1 | 73.6 | 54.3 | 50 | 29.6 | 43.9 | 27.7 | 22.1 | 58.8 |
| DCLM-7B | 1.0·10²³ | 56.9 | 79.8 | 82.3 | 77.3 | 64.4 | 39.3 | 28.8 | 47.5 | 46.1 | 31.3 | 72.1 |
| OLMo-2-1124-7B | 1.8·10²³ | 62.9 | 79.8 | 83.8 | 77.2 | 63.7 | 60.8 | 36.9 | 50.4 | 67.5 | 31 | 78 |
| OLMo-2-1124-13B | 4.6·10²³ | 68.3 | 83.5 | 86.4 | 81.5 | 67.5 | 70.7 | 46.7 | 54.2 | 75.1 | 35.1 | 81.9 |
Model Details
Pretraining
| Stage | OLMo 2 7B | OLMo 2 13B |
|---|---|---|
| Pretraining Stage 1 (OLMo-Mix-1124) | 4 trillion tokens (1 epoch) | 5 trillion tokens (1.2 epochs) |
| Pretraining Stage 2 (Dolmino-Mix-1124) | 50B tokens (3 runs), merged | 100B tokens (3 runs) + 300B tokens (1 run), merged |
| Post-training (Tulu 3 SFT OLMo mix) | SFT + DPO + PPO (preference mix) | SFT + DPO + PPO (preference mix) |
Stage 1: Initial Pretraining
  • Dataset: OLMo-Mix-1124 (3.9T tokens)
  • Coverage: 90%+ of total pretraining budget
  • 7B Model: ~1 epoch
  • 13B Model: 1.2 epochs (5T tokens)
Stage 2: Fine-tuning
  • Dataset: Dolmino-Mix-1124 (843B tokens)
  • Three training mixes:
    • 50B tokens
    • 100B tokens
    • 300B tokens
  • Mix composition: 50% high-quality data + academic/Q&A/instruction/math content
Model Merging
  • 7B Model: 3 versions trained on 50B mix, merged via model souping
  • 13B Model: 3 versions on 100B mix + 1 version on 300B mix, merged for final checkpoint
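Model souping here means averaging the weights of the separately trained versions. As a rough illustration of the idea (a uniform average over checkpoints of the same architecture), not a reproduction of Ai2's actual merging pipeline:

import torch
from transformers import AutoModelForCausalLM

def soup(model_name, revisions):
    # Uniformly average the weights of several checkpoints of the same architecture.
    # Loads every ingredient into memory at once, which is very expensive for a 13B
    # model; this is meant only to illustrate the technique.
    models = [AutoModelForCausalLM.from_pretrained(model_name, revision=rev) for rev in revisions]
    state_dicts = [m.state_dict() for m in models]
    averaged = {
        key: torch.mean(torch.stack([sd[key].float() for sd in state_dicts]), dim=0)
        for key in state_dicts[0]
    }
    souped = models[0]
    souped.load_state_dict(averaged)
    return souped

# Usage (revision names are placeholders; use the stage2-ingredientN-* branches listed on the Hub):
# souped = soup("allenai/OLMo-2-1124-13B", ["<ingredient-1-revision>", "<ingredient-2-revision>"])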
Bias, Risks, and Limitations

Like any base language model or fine-tuned model without safety filtering, these models can easily be prompted by users to generate harmful and sensitive content. Such content may also be produced unintentionally, especially in cases involving bias, so we recommend that users consider the risks when applying this technology. Additionally, statements generated by OLMo, like those of any LLM, are often inaccurate, so facts should be verified.

License and use

OLMo 2 is licensed under the Apache 2.0 license. OLMo 2 is intended for research and educational use. For more information, please see our Responsible Use Guidelines.

Citation

A technical manuscript is forthcoming!

Model Card Contact

For errors in this model card, contact olmo@allenai.org.

Runs of allenai OLMo-2-1124-13B on huggingface.co

Total runs: 12.4K · 24-hour runs: 0 · 3-day runs: 6 · 7-day runs: -6 · 30-day runs: 8.0K

More Information About OLMo-2-1124-13B huggingface.co Model

OLMo-2-1124-13B license: https://choosealicense.com/licenses/apache-2.0

OLMo-2-1124-13B is an AI model hosted on huggingface.co. huggingface.co offers a free online trial of OLMo-2-1124-13B as well as paid use, and the model can be called through an API from Node.js, Python, or plain HTTP.

OLMo-2-1124-13B is also open source: its code is available on GitHub, so any user can install and run it locally, or use the hosted version on huggingface.co for debugging and trials.

OLMo-2-1124-13B huggingface.co URL: https://huggingface.co/allenai/OLMo-2-1124-13B

Provider: allenai (ORGANIZATIONS)
