The model was trained on English and GitHub code. As such it is not an instruction model, and commands like "Write a function that computes the square root." do not work well. However, the instruction-tuned version in StarChat makes a capable assistant.
Feel free to share your generations in the Community tab!
Generation
# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigcode/starcoderplus"
device = "cuda"# for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
Fill-in-the-middle
Fill-in-the-middle uses special tokens to identify the prefix/middle/suffix part of the input and output:
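For example, here is a minimal sketch using the FIM sentinel tokens of the StarCoder family (<fim_prefix>, <fim_suffix>, <fim_middle>), reusing the tokenizer, model, and device from the Generation snippet above:
# Reuses tokenizer, model, and device from the Generation example above.
# The prompt wraps the known prefix and suffix in sentinel tokens; the model
# generates the missing middle after <fim_middle>.
input_text = "<fim_prefix>def print_hello_world():\n    <fim_suffix>\n    print('Hello world!')<fim_middle>"
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))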
Attribution & Other Requirements
The training code dataset of the model was filtered for permissive licenses only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or other specific requirements that must be respected. We provide a search index that lets you search through the pretraining data to identify where generated code came from and apply the proper attribution to your code.
Limitations
The model has been trained on a mixture of English text from the web and GitHub code. Therefore it might encounter limitations when working with non-English text, and can carry the stereotypes and biases commonly encountered online.
Additionally, the generated code should be used with caution as it may contain errors, inefficiencies, or potential vulnerabilities. For a more comprehensive understanding of the base model's code limitations, please refer to the StarCoder paper.
Training
StarCoderPlus is a version of StarCoderBase fine-tuned on 600B tokens of English text and code; StarCoderBase itself was pre-trained on 1T code tokens. Below are the fine-tuning details:
Model
Architecture: GPT-2 model with multi-query attention and Fill-in-the-Middle objective
License
The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement here.