SantaCoder

Play with the model on the SantaCoder Space Demo.

Table of Contents

  1. Model Summary
  2. Use
  3. Limitations
  4. Training
  5. License
  6. Citation

Model Summary

The SantaCoder models are a series of 1.1B-parameter models trained on the Python, Java, and JavaScript subset of The Stack (v1.1), which excludes opt-out requests. The main model uses Multi Query Attention (sketched below), a context window of 2048 tokens, and was trained using near-deduplication and comment-to-code ratio as filtering criteria and using the Fill-in-the-Middle objective. In addition, there are several models that were trained on datasets with different filter parameters and with architecture and objective variations.

Model      Architecture  Objective  Filtering
mha        MHA           AR + FIM   Base
no-fim     MQA           AR         Base
fim        MQA           AR + FIM   Base
stars      MQA           AR + FIM   GitHub stars
fertility  MQA           AR + FIM   Tokenizer fertility
comments   MQA           AR + FIM   Comment-to-code ratio
dedup-alt  MQA           AR + FIM   Stronger near-deduplication
final      MQA           AR + FIM   Stronger near-deduplication and comment-to-code ratio

The final model is the best-performing model and was trained for twice as long (236B tokens) as the others. This checkpoint is the default model and is available on the main branch. All other checkpoints are on separate branches with corresponding names.
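For readers unfamiliar with the architecture choice above: Multi Query Attention (MQA) computes many query heads but a single shared key/value head, which shrinks the KV cache and speeds up inference. Below is a minimal, illustrative PyTorch sketch of the idea only; it is not SantaCoder's actual implementation, and the causal mask is omitted for brevity:

import torch

def multi_query_attention(x, w_q, w_k, w_v, n_heads):
    # x: (batch, seq, d_model); w_q: (d_model, d_model);
    # w_k, w_v: (d_model, head_dim) -- one shared K/V head (the "multi query" part)
    b, s, d = x.shape
    head_dim = d // n_heads
    q = (x @ w_q).view(b, s, n_heads, head_dim).transpose(1, 2)  # (b, h, s, hd)
    k = (x @ w_k).unsqueeze(1)                                   # (b, 1, s, hd)
    v = (x @ w_v).unsqueeze(1)                                   # (b, 1, s, hd)
    scores = (q @ k.transpose(-2, -1)) / head_dim**0.5           # K/V broadcast across heads
    out = scores.softmax(dim=-1) @ v                             # (b, h, s, hd)
    return out.transpose(1, 2).reshape(b, s, d)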

Use

Intended use

The model was trained on GitHub code. As such it is not an instruction model, and commands like "Write a function that computes the square root." do not work well. You should phrase requests as they occur in source code, such as in comments (e.g. # the following function computes the sqrt), or write a function signature and docstring and let the model complete the function body.
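For example, a signature-plus-docstring prompt gives the model source-code-shaped context to complete (a hypothetical illustration, not from the card):

# Instruction-style prompts such as "Write a function that computes
# the square root." tend to work poorly; code-shaped prompts work better:
prompt = (
    "def newton_sqrt(x):\n"
    '    """Compute the square root of x using Newton\'s method."""\n'
)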

Feel free to share your generations in the Community tab!

How to use
Generation
# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/santacoder"
device = "cuda"  # for GPU usage or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# trust_remote_code=True is required because the model ships custom modeling code
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True).to(device)

# tokenize a code prompt and let the model complete it
inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
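By default generate() decodes greedily and produces only a short continuation; the standard transformers decoding arguments apply. A sketch with illustrative values:

outputs = model.generate(
    inputs,
    max_new_tokens=64,                    # allow a longer completion
    do_sample=True,                       # sample instead of greedy decoding
    temperature=0.2,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,  # silences the missing-pad-token warning
)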
Fill-in-the-middle

Fill-in-the-middle uses special tokens to identify the prefix/middle/suffix part of the input and output:

input_text = "<fim-prefix>def print_hello_world():\n    <fim-suffix>\n    print('Hello world!')<fim-middle>"
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))

Make sure to use <fim-prefix>, <fim-suffix>, and <fim-middle> (with hyphens), not the underscore variants <fim_prefix>, <fim_suffix>, <fim_middle> used in StarCoder models.
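The decoded output echoes the full templated input, with the generated infill following the <fim-middle> sentinel. A minimal helper to pull it out (this assumes the model terminates the infill with its <|endoftext|> token, which is an assumption on our part):

def extract_fim_middle(decoded: str) -> str:
    # everything after <fim-middle> is the generated infill;
    # trim at the end-of-sequence marker if one was emitted
    middle = decoded.split("<fim-middle>")[-1]
    return middle.split("<|endoftext|>")[0]

print(extract_fim_middle(tokenizer.decode(outputs[0])))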

Load other checkpoints

We upload the checkpoint of each experiment to a separate branch, and the intermediate checkpoints as commits on those branches. You can load them with the revision flag:

model = AutoModelForCausalLM.from_pretrained(
    "bigcode/santacoder",
    revision="no-fim", # name of branch or commit hash
    trust_remote_code=True
)
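The tokenizer accepts the same revision argument, so it can be pinned to the matching branch (shown here with the no-fim branch from the table above):

tokenizer = AutoTokenizer.from_pretrained("bigcode/santacoder", revision="no-fim")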
Attribution & Other Requirements

The pretraining dataset of the model was filtered for permissive licenses only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or impose other specific requirements that must be respected. We provide a search index that lets you search through the pretraining data to identify where generated code came from and apply the proper attribution to your code.

Limitations

The model has been trained on source code in Python, Java, and JavaScript. The predominant natural language in the source code is English, although other languages are also present. As such, the model is capable of generating code snippets given some context, but the generated code is not guaranteed to work as intended. It can be inefficient and may contain bugs or exploits.

Training

Model
  • Architecture: GPT-2 model with multi-query attention and Fill-in-the-Middle objective
  • Pretraining steps: 600K
  • Pretraining tokens: 236 billion
  • Precision: float16
Hardware
  • GPUs: 96 Tesla V100
  • Training time: 6.2 days
  • Total FLOPs: 2.1 x 10^21 (see the order-of-magnitude check below)
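As a rough plausibility check on these numbers, the common 6 × parameters × tokens estimate of training compute (an approximation on our part, not the authors' accounting) lands in the same order of magnitude:

params = 1.1e9   # model parameters
tokens = 236e9   # pretraining tokens
approx_flops = 6 * params * tokens  # standard transformer training estimate
print(f"{approx_flops:.1e}")        # ~1.6e21, consistent with the reported 2.1 x 10^21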
Software

License

The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement here.

Citation

@article{allal2023santacoder,
  title={SantaCoder: don't reach for the stars!},
  author={Allal, Loubna Ben and Li, Raymond and Kocetkov, Denis and Mou, Chenghao and Akiki, Christopher and Ferrandis, Carlos Munoz and Muennighoff, Niklas and Mishra, Mayank and Gu, Alex and Dey, Manan and others},
  journal={arXiv preprint arXiv:2301.03988},
  year={2023}
}
