stabilityai / stablelm-2-1_6b

huggingface.co · text-generation
Total runs: 5.5K (24-hour: 0 · 7-day: 237 · 30-day: 1.0K)
Model last updated: July 10, 2024

Stable LM 2 1.6B

Please note: for commercial use, refer to https://stability.ai/license

Model Description

Stable LM 2 1.6B is a 1.6 billion parameter decoder-only language model pre-trained on 2 trillion tokens of diverse multilingual and code datasets for two epochs.

Usage

Get started generating text with Stable LM 2 1.6B by using the following code snippet:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model; torch_dtype="auto" keeps the dtype stored in the checkpoint.
tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-2-1_6b")
model = AutoModelForCausalLM.from_pretrained(
  "stabilityai/stablelm-2-1_6b",
  torch_dtype="auto",
)

# Move the model to the GPU (requires CUDA; drop this line to run on CPU).
model.cuda()

# Tokenize a prompt and move the input tensors to the model's device.
inputs = tokenizer("The weather is always wonderful", return_tensors="pt").to(model.device)

# Sample up to 64 new tokens with temperature and nucleus (top-p) sampling.
tokens = model.generate(
  **inputs,
  max_new_tokens=64,
  temperature=0.70,
  top_p=0.95,
  do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
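
Because do_sample=True draws each token stochastically, repeated runs will produce different completions. For reproducible output, fix the random seed with transformers.set_seed before calling generate, or set do_sample=False for greedy decoding.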
Run with Flash Attention 2 ⚡️
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-2-1_6b")

# Same as above, but requesting the FlashAttention-2 attention kernels.
model = AutoModelForCausalLM.from_pretrained(
  "stabilityai/stablelm-2-1_6b",
  torch_dtype="auto",
  attn_implementation="flash_attention_2",
)
model.cuda()

inputs = tokenizer("The weather is always wonderful", return_tensors="pt").to(model.device)
tokens = model.generate(
  **inputs,
  max_new_tokens=64,
  temperature=0.70,
  top_p=0.95,
  do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
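
Note: the flash_attention_2 path assumes the separate flash-attn package is installed (for example via pip install flash-attn) and a sufficiently recent NVIDIA GPU. On unsupported hardware, the default attention implementation in the first snippet should produce the same results, only slower and with higher memory use.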
Model Details
Model Architecture

The model is a decoder-only transformer similar to the LLaMA (Touvron et al., 2023) architecture, with modifications such as rotary position embeddings and LayerNorm with learned bias terms. Its key hyperparameters are:

| Parameters | Hidden Size | Layers | Heads | Sequence Length |
|---|---|---|---|---|
| 1,644,417,024 | 2048 | 24 | 32 | 4096 |
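
These values can be cross-checked against the configuration file shipped with the checkpoint. A minimal sketch, assuming a transformers release with built-in Stable LM support (the attribute names are the standard causal-LM configuration fields, not anything specific to this card):

from transformers import AutoConfig

# Fetch only the model configuration; no weights are downloaded.
config = AutoConfig.from_pretrained("stabilityai/stablelm-2-1_6b")

print(config.hidden_size)              # 2048
print(config.num_hidden_layers)        # 24
print(config.num_attention_heads)      # 32
print(config.max_position_embeddings)  # 4096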
Training
Training Dataset

The training dataset comprises a filtered mixture of open-source large-scale datasets available on the HuggingFace Hub: Falcon RefinedWeb extract (Penedo et al., 2023), RedPajama-Data (Together Computer, 2023), and The Pile (Gao et al., 2020), the latter two without the Books3 subset, and StarCoder (Li et al., 2023). We further supplement training with multilingual data from CulturaX (Nguyen et al., 2023), in particular its OSCAR corpora, as well as restructured data in the style of Yuan & Liu (2022).

  • Given the large amount of web data, we recommend fine-tuning the base Stable LM 2 1.6B for your downstream tasks; a minimal fine-tuning sketch follows below.
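
For readers who want a concrete starting point, the sketch below fine-tunes the base model with the Hugging Face Trainer. It is a minimal sketch, not the authors' recipe: the wikitext corpus stands in for your own task data, and every hyperparameter shown is an illustrative assumption.

from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "stabilityai/stablelm-2-1_6b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # ensure a pad token exists for batch collation
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# Illustrative dataset; substitute your own task-specific corpus.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

# Causal-LM collator: labels are the input ids, shifted inside the model.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="stablelm2-finetune",  # hypothetical output path
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    learning_rate=2e-5,
    bf16=True,  # assumes a GPU with bfloat16 support
    logging_steps=10,
)

Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
).train()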
Training Procedure

The model is pre-trained on the aforementioned datasets in bfloat16 precision, optimized with AdamW, and trained using the Arcade100k tokenizer with a vocabulary size of 100,352. We outline the complete hyperparameter choices in the project's GitHub repository - config*. The final checkpoint of pre-training, before cooldown, is provided in the global_step420000 branch.
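
Because the pre-cooldown checkpoint lives on its own branch, it can typically be loaded by passing the branch name through from_pretrained's revision argument. A minimal sketch, assuming the global_step420000 branch mentioned above remains available on the Hub:

from transformers import AutoModelForCausalLM, AutoTokenizer

# revision selects a specific git branch/tag/commit of the Hub repository.
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stablelm-2-1_6b",
    revision="global_step420000",
    torch_dtype="auto",
)

# The Arcade100k tokenizer should report the vocabulary size stated above.
tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-2-1_6b")
print(len(tokenizer))  # expected: 100352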

Training Infrastructure
  • Hardware: Stable LM 2 1.6B was trained on the Stability AI cluster across 512 NVIDIA A100 40GB GPUs (AWS P4d instances).

  • Software: We use a fork of gpt-neox (EleutherAI, 2021), train under 2D parallelism (data and tensor parallel) with ZeRO-1 (Rajbhandari et al., 2019), and rely on flash-attention as well as SwiGLU and Rotary Embedding kernels from FlashAttention-2 (Dao et al., 2023).

Use and Limitations
Intended Use

The model is intended to be used as a foundational base model for application-specific fine-tuning. Developers must evaluate and fine-tune the model for safe performance in downstream applications. For commercial use, please refer to https://stability.ai/membership.

Limitations and Bias

As a base model, this model may exhibit unreliable, unsafe, or other undesirable behaviors that must be corrected through evaluation and fine-tuning prior to deployment. The pre-training dataset may have contained offensive or inappropriate content, even after applying data cleansing filters, which can be reflected in the model-generated text. We recommend that users exercise caution when using these models in production systems. Do not use the models if they are unsuitable for your application, or for any applications that may cause deliberate or unintentional harm to others.

How to Cite
@article{bellagente2024stable,
  title={Stable LM 2 1.6B Technical Report},
  author={Bellagente, Marco and Tow, Jonathan and Mahan, Dakota and Phung, Duy and Zhuravinskyi, Maksym and Adithyan, Reshinth and Baicoianu, James and Brooks, Ben and Cooper, Nathan and Datta, Ashish and others},
  journal={arXiv preprint arXiv:2402.17834},
  year={2024}
}

More Information About stablelm-2-1_6b

License: other — see https://choosealicense.com/licenses/other

Model URL (free online trial and API access via Node.js, Python, or HTTP): https://huggingface.co/stabilityai/stablelm-2-1_6b

Provider: stabilityai
