Falcon2-11B is an 11B-parameter causal decoder-only model built by TII and trained on over 5,000B tokens of RefinedWeb enhanced with curated corpora. The model is made available under the TII Falcon License 2.0, a permissive Apache 2.0-based software license that includes an acceptable use policy promoting the responsible use of AI.
Direct Use
Research on large language models; as a foundation for further specialization and finetuning for specific use cases (e.g., summarization, text generation, chatbots, etc.).
Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
Bias, Risks, and Limitations
Falcon2-11B is trained mostly on English, but also on German, Spanish, French, Italian, Portuguese, Polish, Dutch, Romanian, Czech, and Swedish. It will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
Recommendations
We recommend that users of Falcon2-11B consider finetuning it for their specific set of tasks of interest, and that guardrails and appropriate precautions be taken for any production use; a minimal finetuning sketch is shown below.
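As one possible starting point, the sketch below outlines parameter-efficient finetuning with LoRA via the peft library. The LoRA rank, scaling factor, and the target module name "query_key_value" are illustrative assumptions rather than part of this model card; verify module names against the loaded model and adapt the hyperparameters to your task.

```python
# Minimal LoRA finetuning sketch (assumptions: peft is installed, the Falcon attention
# projection is named "query_key_value", and the hyperparameters are placeholders).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "tiiuae/falcon-11B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

lora_config = LoraConfig(
    r=16,                                # adapter rank (placeholder)
    lora_alpha=32,                       # scaling factor (placeholder)
    target_modules=["query_key_value"],  # assumed name of the Falcon attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
# From here, train with your preferred loop or with transformers.Trainer on task-specific data.
```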
How to Get Started with the Model
from transformers import AutoTokenizer
import transformers
import torch

model = "tiiuae/falcon-11B"
tokenizer = AutoTokenizer.from_pretrained(model)

# Build a text-generation pipeline in bfloat16, sharding weights across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Sample a completion with top-k sampling.
sequences = pipeline(
    "Can you explain the concepts of Quantum Computing?",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
Training Details
Training Data
Falcon2-11B was trained over 5,000B tokens of RefinedWeb, a high-quality filtered and deduplicated web dataset, which we enhanced with curated corpora. It followed a four-stage training strategy. The first three stages focused on increasing the context length, from 2048 to 4096 and finally to 8192 tokens. The last stage aimed to further enhance performance using only high-quality data.
Overall, the data sources included RefinedWeb-English, RefinedWeb-Europe (cs, de, es, fr, it, nl, pl, pt, ro, sv), high-quality technical data, code data, and conversational data extracted from public sources.
The training stages were as follows:
| Stage | Context length | Tokens |
| --- | --- | --- |
| Stage 1 | 2048 | 4500 B |
| Stage 2 | 4096 | 250 B |
| Stage 3 | 8192 | 250 B |
| Stage 4 | 8192 | 500 B |
The data was tokenized with the Falcon-7B/11B tokenizer.
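For reference, the tokenizer can be loaded directly from the Hub. The sketch below is a minimal illustration (the sample sentence is arbitrary, not taken from the training data) showing how to inspect the vocabulary and tokenize text.

```python
# Load the Falcon-7B/11B tokenizer and tokenize a sample string
# (illustrative only; the sample text is not from the training corpus).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-11B")
print(tokenizer.vocab_size)  # size of the shared Falcon vocabulary

tokens = tokenizer("RefinedWeb is a filtered and deduplicated web dataset.")
print(tokens["input_ids"])                                   # token ids as seen by the model
print(tokenizer.convert_ids_to_tokens(tokens["input_ids"]))  # human-readable subword pieces
```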
Training Procedure
Falcon2-11B was trained on 1024 A100 40GB GPUs for the majority of the training, using a 3D parallelism strategy (TP=8, PP=1, DP=128) combined with ZeRO and Flash-Attention 2.
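To make the parallelism layout concrete, the sketch below checks that the stated degrees multiply out to 1024 workers and shows one common way to map a flat GPU rank onto (TP, PP, DP) coordinates. The rank ordering is an assumed convention for illustration, not a description of the actual training codebase.

```python
# Illustrative decomposition of the 3D-parallel layout (TP=8, PP=1, DP=128).
# The rank -> coordinate ordering below is an assumed convention, not the real one.
TP, PP, DP = 8, 1, 128
world_size = TP * PP * DP
assert world_size == 1024  # matches the 1024 A100 GPUs used for most of training

def rank_to_coords(rank: int) -> tuple[int, int, int]:
    """Map a flat rank to (tensor-parallel, pipeline-parallel, data-parallel) indices."""
    tp = rank % TP
    pp = (rank // TP) % PP
    dp = rank // (TP * PP)
    return tp, pp, dp

print(rank_to_coords(0))     # (0, 0, 0)
print(rank_to_coords(1023))  # (7, 0, 127)
```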
Training Hyperparameters
| Hyperparameter | Value | Comment |
| --- | --- | --- |
| Precision | bfloat16 | |
| Optimizer | AdamW | |
| Max learning rate | 3.7e-4 | Following a linear warm-up, then cosine decay to 1.89e-5 across 4500 B tokens (sketched below). |
| Weight decay | 1e-1 | |
| Z-loss | 1e-4 | |
| Batch size | Variable | Batch size was gradually increased during the training |
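As an aid to reading the schedule, the sketch below reproduces a linear warm-up followed by cosine decay from 3.7e-4 to 1.89e-5 over 4500 B tokens. The warm-up length is a placeholder assumption, since it is not stated in the table above.

```python
# Learning-rate schedule sketch: linear warm-up, then cosine decay from 3.7e-4
# down to 1.89e-5 over 4500B tokens. The warm-up length is an assumption.
import math

MAX_LR, MIN_LR = 3.7e-4, 1.89e-5
TOTAL_TOKENS = 4500e9
WARMUP_TOKENS = 10e9  # placeholder; the actual warm-up length is not given in the card

def lr_at(tokens_seen: float) -> float:
    if tokens_seen < WARMUP_TOKENS:
        return MAX_LR * tokens_seen / WARMUP_TOKENS          # linear warm-up to the peak
    progress = (tokens_seen - WARMUP_TOKENS) / (TOTAL_TOKENS - WARMUP_TOKENS)
    progress = min(progress, 1.0)
    return MIN_LR + 0.5 * (MAX_LR - MIN_LR) * (1 + math.cos(math.pi * progress))

print(lr_at(0))              # 0.0 at the start of warm-up
print(lr_at(WARMUP_TOKENS))  # peak learning rate, 3.7e-4
print(lr_at(TOTAL_TOKENS))   # final learning rate, 1.89e-5
```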
Speeds, Sizes, Times
The model training took roughly two months.
Evaluation
| English Benchmark | Value |
| --- | --- |
| ARC-Challenge-25shots | 59.73 |
| HellaSwag-10shots | 82.91 |
| MMLU-5shots | 58.37 |
| Winogrande-5shots | 78.30 |
| TruthfulQA-0shot | 52.56 |
| GSM8k-5shots | 53.83 |
| ARC-Challenge-0shot | 50.17 |
| ARC-Easy-0shot | 77.78 |
| HellaSwag-0shot | 82.07 |
We thank the leaderboard team from HuggingFace for providing an official evaluation of our model on the leaderboard tasks.
Technical Specifications
Model Architecture and Objective
Falcon2-11B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
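The objective amounts to standard next-token cross-entropy. The sketch below is a generic illustration in PyTorch (with a toy vocabulary size and random tensors), not Falcon's training code; it shows the label shift that turns a token sequence into prediction targets.

```python
# Generic causal-LM objective sketch: predict token t+1 from tokens <= t.
# Toy sizes and random tensors are used purely for illustration.
import torch
import torch.nn.functional as F

batch, seq_len, vocab = 2, 8, 1000                 # toy dimensions, not Falcon's
logits = torch.randn(batch, seq_len, vocab)        # model outputs, one distribution per position
input_ids = torch.randint(0, vocab, (batch, seq_len))

shift_logits = logits[:, :-1, :]                   # predictions for positions 0..T-2
shift_labels = input_ids[:, 1:]                    # targets are the next tokens, 1..T-1
loss = F.cross_entropy(shift_logits.reshape(-1, vocab), shift_labels.reshape(-1))
print(loss)
```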
The architecture is broadly adapted from the GPT-3 paper (Brown et al., 2020), with differences that include rotary positional embeddings and parallel attention/MLP decoder blocks.
Hardware
Falcon2-11B was trained on AWS SageMaker, using on average 1024 A100 40GB GPUs in 128 p4d instances.
Software
Falcon2-11B was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO, high-performance Triton kernels, and FlashAttention-2. More details about the distributed training strategy can be found in Almazrouei et al.
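While the training codebase itself is not public, FlashAttention-2 can also be enabled at inference time through transformers. The sketch below is a minimal illustration and assumes the flash-attn package is installed on a compatible GPU; the prompt is arbitrary.

```python
# Load Falcon2-11B with FlashAttention-2 kernels for inference
# (assumes the flash-attn package is installed and the GPU supports it).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-11B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    attn_implementation="flash_attention_2",
)

inputs = tokenizer("The Falcon models were trained on", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```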