apple/OpenELM-270M



OpenELM

Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari

We introduce OpenELM, a family of **O**pen **E**fficient **L**anguage **M**odels. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. We pretrained OpenELM models using the CoreNet library. We release both pretrained and instruction-tuned models with 270M, 450M, 1.1B, and 3B parameters. We release the complete framework, encompassing data preparation, training, fine-tuning, and evaluation procedures, alongside multiple pre-trained checkpoints and training logs, to facilitate open research.
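
To make the layer-wise scaling idea concrete, here is a minimal sketch of how per-layer widths could be derived. The linear interpolation mirrors the strategy described in the paper, but the parameter names and ranges below are illustrative assumptions, not the released OpenELM configuration; see the technical report for the exact parameterization.

```python
# Illustrative sketch of layer-wise scaling (the alpha/beta ranges are
# assumptions, not OpenELM's released configuration): instead of giving
# every transformer layer the same width, layer i receives multipliers
# interpolated between a minimum (early layers) and a maximum (late layers).
def layerwise_scales(num_layers: int,
                     alpha_min: float = 0.5, alpha_max: float = 1.0,
                     beta_min: float = 0.5, beta_max: float = 4.0):
    """Return per-layer (attention multiplier, FFN multiplier) pairs."""
    scales = []
    for i in range(num_layers):
        t = i / max(num_layers - 1, 1)  # 0.0 at the first layer, 1.0 at the last
        alpha_i = alpha_min + (alpha_max - alpha_min) * t  # scales attention width
        beta_i = beta_min + (beta_max - beta_min) * t      # scales FFN hidden size
        scales.append((alpha_i, beta_i))
    return scales
```

Early layers thus spend parameters sparingly, while later layers get wider attention and feed-forward blocks, rather than allocating a uniform budget to every layer.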

Our pre-training dataset contains RefinedWeb, deduplicated PILE, a subset of RedPajama, and a subset of Dolma v1.6, totaling approximately 1.8 trillion tokens. Please check license agreements and terms of these datasets before using them.

Usage

We have provided an example function to generate output from OpenELM models loaded via the Hugging Face Hub in `generate_openelm.py`.

You can try the model by running the following command:

```bash
python generate_openelm.py --model apple/OpenELM-270M --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2
```

Please refer to this link to obtain your Hugging Face access token.

Additional arguments to the Hugging Face `generate` function can be passed via `generate_kwargs`. As an example, to speed up inference, you can try lookup-token speculative generation by passing the `prompt_lookup_num_tokens` argument as follows:

```bash
python generate_openelm.py --model apple/OpenELM-270M --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 prompt_lookup_num_tokens=10
```

Alternatively, try model-wise speculative generation with an assistive model by passing a smaller model through the `assistant_model` argument, for example:

```bash
python generate_openelm.py --model apple/OpenELM-270M --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 --assistant_model [SMALLER_MODEL]
```
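
If you prefer to call the model directly rather than through `generate_openelm.py`, a minimal sketch along the following lines should work; it assumes `transformers>=4.38` and access to the `meta-llama/Llama-2-7b-hf` tokenizer, which OpenELM reuses (see the evaluation section below).

```python
# Minimal sketch (not part of the official instructions): load OpenELM-270M
# with transformers and generate text. Assumes transformers>=4.38 and access
# to the Llama-2 tokenizer that OpenELM reuses.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "apple/OpenELM-270M", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("Once upon a time there was", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.2)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
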
Main Results
Zero-Shot
| Model Size | ARC-c | ARC-e | BoolQ | HellaSwag | PIQA | SciQ | WinoGrande | Average |
|---|---|---|---|---|---|---|---|---|
| OpenELM-270M | 26.45 | 45.08 | 53.98 | 46.71 | 69.75 | 84.70 | 53.91 | 54.37 |
| OpenELM-270M-Instruct | 30.55 | 46.68 | 48.56 | 52.07 | 70.78 | 84.40 | 52.72 | 55.11 |
| OpenELM-450M | 27.56 | 48.06 | 55.78 | 53.97 | 72.31 | 87.20 | 58.01 | 57.56 |
| OpenELM-450M-Instruct | 30.38 | 50.00 | 60.37 | 59.34 | 72.63 | 88.00 | 58.96 | 59.95 |
| OpenELM-1_1B | 32.34 | 55.43 | 63.58 | 64.81 | 75.57 | 90.60 | 61.72 | 63.44 |
| OpenELM-1_1B-Instruct | 37.97 | 52.23 | 70.00 | 71.20 | 75.03 | 89.30 | 62.75 | 65.50 |
| OpenELM-3B | 35.58 | 59.89 | 67.40 | 72.44 | 78.24 | 92.70 | 65.51 | 67.39 |
| OpenELM-3B-Instruct | 39.42 | 61.74 | 68.17 | 76.36 | 79.00 | 92.50 | 66.85 | 69.15 |
LLM360
| Model Size | ARC-c | HellaSwag | MMLU | TruthfulQA | WinoGrande | Average |
|---|---|---|---|---|---|---|
| OpenELM-270M | 27.65 | 47.15 | 25.72 | 39.24 | 53.83 | 38.72 |
| OpenELM-270M-Instruct | 32.51 | 51.58 | 26.70 | 38.72 | 53.20 | 40.54 |
| OpenELM-450M | 30.20 | 53.86 | 26.01 | 40.18 | 57.22 | 41.50 |
| OpenELM-450M-Instruct | 33.53 | 59.31 | 25.41 | 40.48 | 58.33 | 43.41 |
| OpenELM-1_1B | 36.69 | 65.71 | 27.05 | 36.98 | 63.22 | 45.93 |
| OpenELM-1_1B-Instruct | 41.55 | 71.83 | 25.65 | 45.95 | 64.72 | 49.94 |
| OpenELM-3B | 42.24 | 73.28 | 26.76 | 34.98 | 67.25 | 48.90 |
| OpenELM-3B-Instruct | 47.70 | 76.87 | 24.80 | 38.76 | 67.96 | 51.22 |
OpenLLM Leaderboard
| Model Size | ARC-c | CrowS-Pairs | HellaSwag | MMLU | PIQA | RACE | TruthfulQA | WinoGrande | Average |
|---|---|---|---|---|---|---|---|---|---|
| OpenELM-270M | 27.65 | 66.79 | 47.15 | 25.72 | 69.75 | 30.91 | 39.24 | 53.83 | 45.13 |
| OpenELM-270M-Instruct | 32.51 | 66.01 | 51.58 | 26.70 | 70.78 | 33.78 | 38.72 | 53.20 | 46.66 |
| OpenELM-450M | 30.20 | 68.63 | 53.86 | 26.01 | 72.31 | 33.11 | 40.18 | 57.22 | 47.69 |
| OpenELM-450M-Instruct | 33.53 | 67.44 | 59.31 | 25.41 | 72.63 | 36.84 | 40.48 | 58.33 | 49.25 |
| OpenELM-1_1B | 36.69 | 71.74 | 65.71 | 27.05 | 75.57 | 36.46 | 36.98 | 63.22 | 51.68 |
| OpenELM-1_1B-Instruct | 41.55 | 71.02 | 71.83 | 25.65 | 75.03 | 39.43 | 45.95 | 64.72 | 54.40 |
| OpenELM-3B | 42.24 | 73.29 | 73.28 | 26.76 | 78.24 | 38.76 | 34.98 | 67.25 | 54.35 |
| OpenELM-3B-Instruct | 47.70 | 72.33 | 76.87 | 24.80 | 79.00 | 38.47 | 38.76 | 67.96 | 55.73 |

See the technical report for more results and comparisons.

Evaluation
Setup

Install the following dependencies:


```bash
# install public lm-eval-harness
harness_repo="public-lm-eval-harness"
git clone https://github.com/EleutherAI/lm-evaluation-harness ${harness_repo}
cd ${harness_repo}
# use main branch on 03-15-2024, SHA is dc90fec
git checkout dc90fec
pip install -e .
cd ..

# 66d6242 is the main branch on 2024-04-01
pip install datasets@git+https://github.com/huggingface/datasets.git@66d6242
# quote the version specifiers so the shell does not treat >= as a redirect
pip install 'tokenizers>=0.15.2' 'transformers>=4.38.2' 'sentencepiece>=0.2.0'
```
Evaluate OpenELM

```bash
# OpenELM-270M
hf_model=apple/OpenELM-270M

# this flag is needed because lm-eval-harness sets add_bos_token to False by default,
# but OpenELM uses the LLaMA tokenizer, which requires add_bos_token to be True
tokenizer=meta-llama/Llama-2-7b-hf
add_bos_token=True
batch_size=1

mkdir lm_eval_output

shot=0
task=arc_challenge,arc_easy,boolq,hellaswag,piqa,race,winogrande,sciq,truthfulqa_mc2
lm_eval --model hf \
        --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
        --tasks ${task} \
        --device cuda:0 \
        --num_fewshot ${shot} \
        --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
        --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log

shot=5
task=mmlu,winogrande
lm_eval --model hf \
        --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
        --tasks ${task} \
        --device cuda:0 \
        --num_fewshot ${shot} \
        --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
        --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log

shot=25
task=arc_challenge,crows_pairs_english
lm_eval --model hf \
        --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
        --tasks ${task} \
        --device cuda:0 \
        --num_fewshot ${shot} \
        --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
        --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log

shot=10
task=hellaswag
lm_eval --model hf \
        --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
        --tasks ${task} \
        --device cuda:0 \
        --num_fewshot ${shot} \
        --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
        --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
```
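
Each run writes a JSON results file under `./lm_eval_output/`. A small helper like the sketch below can tabulate the accuracies; it assumes the harness's standard output layout, where per-task metrics live under a top-level "results" key (metric key names vary across harness versions).

```python
# Hypothetical helper to summarize lm-eval-harness results; assumes the
# standard JSON layout with per-task metrics under a "results" key.
import glob
import json

for path in sorted(glob.glob("./lm_eval_output/**/*.json", recursive=True)):
    with open(path) as f:
        report = json.load(f)
    for task, metrics in report.get("results", {}).items():
        # metric keys differ across harness versions (e.g. "acc" vs "acc,none")
        acc = metrics.get("acc,none", metrics.get("acc"))
        print(f"{path}\t{task}\tacc={acc}")
```
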
Bias, Risks, and Limitations

The release of OpenELM models aims to empower and enrich the open research community by providing access to state-of-the-art language models. Trained on publicly available datasets, these models are made available without any safety guarantees. Consequently, they may produce outputs that are inaccurate, harmful, biased, or objectionable in response to user prompts. It is therefore imperative that users and developers undertake thorough safety testing and implement filtering mechanisms tailored to their specific requirements.

Citation

If you find our work useful, please cite:

```BibTeX
@article{mehtaOpenELMEfficientLanguage2024,
    title = {{OpenELM}: {An} {Efficient} {Language} {Model} {Family} with {Open} {Training} and {Inference} {Framework}},
    shorttitle = {{OpenELM}},
    url = {https://arxiv.org/abs/2404.14619v1},
    language = {en},
    urldate = {2024-04-24},
    journal = {arXiv.org},
    author = {Mehta, Sachin and Sekhavat, Mohammad Hossein and Cao, Qingqing and Horton, Maxwell and Jin, Yanzi and Sun, Chenfan and Mirzadeh, Iman and Najibi, Mahyar and Belenko, Dmitry and Zatloukal, Peter and Rastegari, Mohammad},
    month = apr,
    year = {2024},
}

@inproceedings{mehta2022cvnets,
    author = {Mehta, Sachin and Abdolhosseini, Farzad and Rastegari, Mohammad},
    title = {CVNets: High Performance Library for Computer Vision},
    year = {2022},
    booktitle = {Proceedings of the 30th ACM International Conference on Multimedia},
    series = {MM '22}
}
```
