Falcon-7B is a 7B-parameter causal decoder-only model built by TII and trained on 1,500B tokens of RefinedWeb enhanced with curated corpora. It is made available under the Apache 2.0 license.

Paper coming soon 😊.
🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading this great blogpost from HF!
It is made available under a permissive Apache 2.0 license allowing for commercial use, without any royalties or restrictions.
⚠️ This is a raw, pretrained model, which should be further finetuned for most use cases. If you are looking for a version better suited to taking generic instructions in a chat format, we recommend taking a look at Falcon-7B-Instruct.
🔥 Looking for an even more powerful model? Falcon-40B is Falcon-7B's big brother!
💥 Falcon LLMs require PyTorch 2.0 for use with transformers!
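Before running the examples below, it can help to verify the installed PyTorch version; a minimal sketch (the version floor simply restates the note above):

```python
import torch

# Falcon's custom modeling code expects PyTorch 2.0 or newer; fail early if not.
major = int(torch.__version__.split(".")[0])
assert major >= 2, f"PyTorch 2.0+ required, found {torch.__version__}"
```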
Language(s) (NLP): English, German, Spanish, French (and limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish);
License: Apache 2.0.
Model Source
Paper: coming soon.
Uses
Direct Use
Research on large language models; as a foundation for further specialization and finetuning for specific use cases (e.g., summarization, text generation, chatbot, etc.)
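As a sketch of what such specialization might look like, parameter-efficient finetuning with LoRA via the peft library could be set up as below. This is not an official recipe: the rank, alpha, and dropout values are illustrative assumptions, though "query_key_value" is the name of Falcon's fused attention projection.

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)

# Hypothetical LoRA hyperparameters; tune for your task.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["query_key_value"],  # Falcon's fused QKV projection
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices train
```

From here the wrapped model can be passed to a standard transformers Trainer.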
Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
Bias, Risks, and Limitations
Falcon-7B is trained primarily on English and French data, and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
Recommendations
We recommend that users of Falcon-7B consider finetuning it for the specific set of tasks of interest, and that guardrails and appropriate precautions be taken for any production use.
How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-7b"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Girafatron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
Training Details
Training Data
Falcon-7B was trained on 1,500B tokens of RefinedWeb, a high-quality filtered and deduplicated web dataset which we enhanced with curated corpora. Significant components from our curated corpora were inspired by The Pile (Gao et al., 2020).
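A public extract of RefinedWeb is hosted on the Hugging Face Hub. As a hedged sketch (the dataset id and field name below are taken from the public dataset card; streaming avoids downloading the full corpus):

```python
from datasets import load_dataset

# Stream the public RefinedWeb extract instead of downloading it in full.
refinedweb = load_dataset("tiiuae/falcon-refinedweb", split="train", streaming=True)
for sample in refinedweb.take(1):
    print(sample["content"][:200])
```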
Decoder-block: parallel attention/MLP with a single layer norm (sketched below).
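A minimal sketch of the parallel formulation, using standard multi-head attention for simplicity (Falcon actually uses multiquery attention with rotary embeddings, which this does not reproduce):

```python
import torch
import torch.nn as nn

class ParallelDecoderBlock(nn.Module):
    """One shared layer norm feeds both the attention and MLP branches."""
    def __init__(self, d_model: int = 4544, n_heads: int = 71):
        super().__init__()
        self.ln = nn.LayerNorm(d_model)  # the single shared layer norm
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.ln(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        # Attention and MLP run in parallel on the same normed input,
        # and both outputs are added to the residual stream.
        return x + attn_out + self.mlp(h)
```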
| Hyperparameter   | Value | Comment                                 |
|------------------|-------|-----------------------------------------|
| Layers           | 32    |                                         |
| `d_model`        | 4544  | Increased to compensate for multiquery  |
| `head_dim`       | 64    | Reduced to optimise for FlashAttention  |
| Vocabulary       | 65024 |                                         |
| Sequence length  | 2048  |                                         |
Compute Infrastructure
Hardware
Falcon-7B was trained on AWS SageMaker, on 384 A100 40GB GPUs in P4d instances.
Software
Falcon-7B was trained using a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).
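Gigatron itself is not public. Purely as an illustration of what a ZeRO-style data-parallel configuration looks like in open tooling, here is a hedged DeepSpeed config sketch (all values hypothetical, not Falcon's actual settings):

```python
# Illustrative DeepSpeed configuration enabling ZeRO stage-1 sharding;
# none of these values reflect Falcon's actual training setup.
ds_config = {
    "train_micro_batch_size_per_gpu": 4,
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 1,            # shard optimizer states across data-parallel ranks
        "overlap_comm": True,  # overlap gradient communication with compute
    },
}
```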
Citation
Paper coming soon 😊. In the meantime, you can use the following information to cite:
```bibtex
@article{falcon40b,
  title={{Falcon-40B}: an open large language model with state-of-the-art performance},
  author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
  year={2023}
}
```
To learn more about the pretraining dataset, see the 📓 RefinedWeb paper.
```bibtex
@article{refinedweb,
  title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
  author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
  journal={arXiv preprint arXiv:2306.01116},
  eprint={2306.01116},
  eprinttype={arXiv},
  url={https://arxiv.org/abs/2306.01116},
  year={2023}
}
```
License
Falcon-7B is made available under the Apache 2.0 license.