tiiuae/falcon-7b


🚀 Falcon-7B

Falcon-7B is a 7B-parameter causal decoder-only model built by TII and trained on 1,500B tokens of RefinedWeb enhanced with curated corpora. It is made available under the Apache 2.0 license.

Paper coming soon 😊.

🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading this great blog post from HF!

Why use Falcon-7B?

  • It was trained on 1,500B tokens of RefinedWeb enhanced with curated corpora (see Training Details below);
  • Its architecture is optimized for inference, with multiquery attention and FlashAttention (see Technical Specifications below);
  • It is made available under the permissive Apache 2.0 license.

⚠️ This is a raw, pretrained model, which should be further finetuned for most use cases. If you are looking for a version better suited to taking generic instructions in a chat format, we recommend taking a look at Falcon-7B-Instruct.

🔥 Looking for an even more powerful model? Falcon-40B is Falcon-7B's big brother!

You can run inference with the transformers pipeline API, as in the quickstart below:

from transformers import AutoTokenizer
import transformers
import torch

model = "tiiuae/falcon-7b"

# Load the tokenizer and build a text-generation pipeline; bfloat16 halves
# memory use, and device_map="auto" spreads the model across available GPUs.
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
# Sample up to 200 tokens, restricting sampling to the 10 most likely tokens.
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Girafatron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")

💥 Falcon LLMs require PyTorch 2.0 for use with transformers!

For fast inference with Falcon, check out Text Generation Inference! Read more in this blog post.
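
As a rough sketch, you can serve Falcon-7B with Text Generation Inference's Docker image and query it from Python. The image tag, port, prompt, and generation parameters below are illustrative defaults chosen for this example, not values prescribed by this card:

# Serve the model (shell): the official TGI container exposes an HTTP API.
#   docker run --gpus all -p 8080:80 \
#     ghcr.io/huggingface/text-generation-inference:latest \
#     --model-id tiiuae/falcon-7b

import requests

# Query the local TGI server's /generate endpoint.
resp = requests.post(
    "http://127.0.0.1:8080/generate",
    json={
        "inputs": "The capital of the UAE is",
        "parameters": {"max_new_tokens": 20},
    },
)
print(resp.json()["generated_text"])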

You will need at least 16GB of memory to swiftly run inference with Falcon-7B.
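
If 16GB is not available, one common workaround (our suggestion, not something this card prescribes) is 8-bit quantization via the bitsandbytes package, which roughly halves the memory footprint relative to bfloat16, as in this minimal sketch:

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Load Falcon-7B with 8-bit weights; requires the bitsandbytes package.
model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")

inputs = tokenizer("Falcon-7B is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))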

Model Card for Falcon-7B

Model Details
Model Description
  • Developed by: https://www.tii.ae;
  • Model type: Causal decoder-only;
  • Language(s) (NLP): English, German, Spanish, French (and limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish);
  • License: Apache 2.0.
Model Source
  • Paper: coming soon.
Uses
Direct Use

Research on large language models; as a foundation for further specialization and finetuning for specific use cases (e.g., summarization, text generation, chatbots, etc.). A minimal finetuning sketch follows.
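
As an illustration of such specialization, here is a minimal parameter-efficient finetuning sketch using LoRA adapters from the peft library. The target module name "query_key_value" and all hyperparameters are assumptions for illustration, not settings from this card:

import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Wrap Falcon-7B with low-rank adapters so only a small fraction of
# the weights are trained.
model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
lora = LoraConfig(
    r=16,                                # adapter rank (assumed)
    lora_alpha=32,
    target_modules=["query_key_value"],  # Falcon's fused attention projection (assumed)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()
# From here, train with your usual loop or transformers.Trainer on task data.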

Out-of-Scope Use

Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.

Bias, Risks, and Limitations

Falcon-7B is trained on English and French data only, and will not generalize appropriately to other languages. Furthermore, as it is trained on a large-scale corpus representative of the web, it will carry the stereotypes and biases commonly encountered online.

Recommendations

We recommend that users of Falcon-7B consider finetuning it for the specific set of tasks of interest, and that guardrails and appropriate precautions be taken for any production use.

How to Get Started with the Model
The quickstart example shown above (under "Why use Falcon-7B?") applies here unchanged.
Training Details
Training Data

Falcon-7B was trained on 1,500B tokens of RefinedWeb, a high-quality filtered and deduplicated web dataset which we enhanced with curated corpora. Significant components from our curated corpora were inspired by The Pile (Gao et al., 2020).

Data source         Fraction  Tokens  Sources
RefinedWeb-English  79%       1,185B  massive web crawl
Books               7%        110B
Conversations       6%        85B     Reddit, StackOverflow, HackerNews
Code                3%        45B
RefinedWeb-French   3%        45B     massive web crawl
Technical           2%        30B     arXiv, PubMed, USPTO, etc.

The data was tokenized with the Falcon-7B/40B tokenizer.
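
As a quick, purely illustrative sanity check, you can load the tokenizer and confirm that its vocabulary size matches the 65,024 entries listed under Technical Specifications below:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
print(len(tokenizer))  # expected: 65024, matching the architecture table below

ids = tokenizer("Falcon soars over the desert.")["input_ids"]
print(ids)
print(tokenizer.convert_ids_to_tokens(ids))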

Training Procedure

Falcon-7B was trained on 384 A100 40GB GPUs, using a 2D parallelism strategy (PP=2, DP=192, i.e. 2 × 192 = 384 GPUs) combined with ZeRO.

Training Hyperparameters

Hyperparameter  Value     Comment
Precision       bfloat16
Optimizer       AdamW
Learning rate   6e-4      4B tokens warm-up, cosine decay to 1.2e-5
Weight decay    1e-1
Z-loss          1e-4
Batch size      2304      30B tokens ramp-up
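
As an illustration of the schedule above, here is a minimal sketch of the learning rate as a function of tokens seen, assuming linear warm-up over the first 4B tokens and taking the full 1,500B-token run as the cosine decay horizon (an assumption on our part, since the card does not state the decay length):

import math

def falcon_lr(tokens_seen: float,
              peak_lr: float = 6e-4,
              final_lr: float = 1.2e-5,
              warmup_tokens: float = 4e9,
              total_tokens: float = 1.5e12) -> float:
    # Linear warm-up from 0 to the peak over the first 4B tokens.
    if tokens_seen < warmup_tokens:
        return peak_lr * tokens_seen / warmup_tokens
    # Cosine decay from the peak down to 1.2e-5 over the remaining tokens.
    progress = (tokens_seen - warmup_tokens) / (total_tokens - warmup_tokens)
    return final_lr + 0.5 * (peak_lr - final_lr) * (1 + math.cos(math.pi * progress))

print(falcon_lr(2e9))     # mid warm-up: 3e-4
print(falcon_lr(4e9))     # peak: 6e-4
print(falcon_lr(1.5e12))  # end of training: 1.2e-5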
Speeds, Sizes, Times

Training happened in early March 2023 and took about two weeks.

Evaluation

Paper coming soon.

See the Open LLM Leaderboard for early results.

Technical Specifications
Model Architecture and Objective

Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).

The architecture is broadly adapted from the GPT-3 paper (Brown et al., 2020), with the following differences:

  • Positional embeddings: rotary (Su et al., 2021);
  • Attention: multiquery (Shazeer et al., 2019) and FlashAttention (Dao et al., 2022);
  • Decoder-block: parallel attention/MLP with a single layer norm.

Hyperparameter   Value  Comment
Layers           32
d_model          4544   Increased to compensate for multiquery
head_dim         64     Reduced to optimise for FlashAttention
Vocabulary       65024
Sequence length  2048
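
To make the multiquery design concrete, here is an illustrative PyTorch sketch (not the model's actual implementation): with d_model=4544 and head_dim=64 there are 71 query heads, but they all share a single key/value head, which shrinks the per-token KV cache by a factor of 71:

import torch

d_model, head_dim = 4544, 64
n_heads = d_model // head_dim  # 71 query heads
batch, seq = 1, 10

# Separate projections: full-width queries, but a single shared K and V head.
wq = torch.nn.Linear(d_model, n_heads * head_dim, bias=False)
wkv = torch.nn.Linear(d_model, 2 * head_dim, bias=False)

x = torch.randn(batch, seq, d_model)
q = wq(x).view(batch, seq, n_heads, head_dim).transpose(1, 2)  # (1, 71, 10, 64)
k, v = wkv(x).split(head_dim, dim=-1)                          # each (1, 10, 64)
k = k.unsqueeze(1)  # (1, 1, 10, 64): broadcast across all 71 query heads
v = v.unsqueeze(1)

# Causal scaled dot-product attention against the shared K/V head.
scores = q @ k.transpose(-2, -1) / head_dim ** 0.5             # (1, 71, 10, 10)
mask = torch.triu(torch.ones(seq, seq, dtype=torch.bool), diagonal=1)
scores = scores.masked_fill(mask, float("-inf"))
out = (scores.softmax(dim=-1) @ v).transpose(1, 2).reshape(batch, seq, d_model)
print(out.shape)  # torch.Size([1, 10, 4544])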
Compute Infrastructure
Hardware

Falcon-7B was trained on AWS SageMaker, on 384 A100 40GB GPUs in P4d instances.

Software

Falcon-7B was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).

Citation

Paper coming soon 😊. In the meantime, you can use the following information to cite:

@article{falcon40b,
  title={{Falcon-40B}: an open large language model with state-of-the-art performance},
  author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
  year={2023}
}

To learn more about the pretraining dataset, see the 📓 RefinedWeb paper.

@article{refinedweb,
  title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
  author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
  journal={arXiv preprint arXiv:2306.01116},
  eprint={2306.01116},
  eprinttype={arXiv},
  url={https://arxiv.org/abs/2306.01116},
  year={2023}
}
License

Falcon-7B is made available under the Apache 2.0 license.

Contact

[email protected]
