michaelfeil / ct2fast-starchat-alpha

huggingface.co
Total runs: 4
24-hour runs: 1
7-day runs: 2
30-day runs: -9
Model's Last Updated: June 02, 2023


# Fast-Inference with Ctranslate2

Speedup inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.
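The 2x-4x memory reduction comes from storing weights as 1-byte int8 values instead of 2- or 4-byte floats. A minimal sketch of symmetric int8 quantization, for illustration only (this is not CTranslate2's actual kernel code; `quantize_int8` and `dequantize_int8` are hypothetical helpers):

```python
def quantize_int8(weights):
    """Map floats to int8 range [-127, 127] using a per-row scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

row = [0.5, -1.27, 0.003, 1.0]
q, scale = quantize_int8(row)
restored = dequantize_int8(q, scale)

# int8 storage: 1 byte per weight vs. 4 bytes for float32 -> 4x smaller,
# at the cost of a small round-trip error bounded by the scale.
assert all(-127 <= v <= 127 for v in q)
assert all(abs(a - b) <= scale for a, b in zip(row, restored))
```

The per-row scale keeps the quantization error proportional to each row's largest weight, which is why accuracy loss is usually small.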

quantized version of HuggingFaceH4/starchat-alpha

pip install "hf-hub-ctranslate2>=2.0.8" "ctranslate2>=3.14.0"

Converted on 2023-06-02 using

ct2-transformers-converter --model HuggingFaceH4/starchat-alpha --output_dir /home/michael/tmp-ct2fast-starchat-alpha --force --copy_files merges.txt all_results.json training_args.bin tokenizer.json README.md dialogue_template.json tokenizer_config.json eval_results.json vocab.json TRAINER_README.md train_results.json generation_config.json trainer_state.json special_tokens_map.json added_tokens.json requirements.txt .gitattributes --quantization int8_float16 --trust_remote_code

Checkpoint compatible with ctranslate2>=3.14.0 and hf-hub-ctranslate2>=2.0.8:

  • compute_type=int8_float16 for device="cuda"
  • compute_type=int8 for device="cpu"

from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer

model_name = "michaelfeil/ct2fast-starchat-alpha"
# use either TranslatorCT2fromHfHub or GeneratorCT2fromHfHub here, depending on model.
model = GeneratorCT2fromHfHub(
        # load in int8 on CUDA
        model_name_or_path=model_name,
        device="cuda",
        compute_type="int8_float16",
        # tokenizer=AutoTokenizer.from_pretrained("HuggingFaceH4/starchat-alpha")
)
outputs = model.generate(
    text=["def fibonnaci(", "User: How are you doing? Bot:"],
    max_length=64,
    include_prompt_in_result=False
)
print(outputs)
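The device/compute-type pairing listed above can be captured in a tiny helper. `pick_compute_type` is a hypothetical convenience function, not part of `hf-hub-ctranslate2`:

```python
def pick_compute_type(device: str) -> str:
    """Return the compute type recommended for this checkpoint per device."""
    types = {"cuda": "int8_float16", "cpu": "int8"}
    if device not in types:
        raise ValueError(f"unsupported device: {device!r}")
    return types[device]

# Mirrors the bullet list above.
assert pick_compute_type("cuda") == "int8_float16"
assert pick_compute_type("cpu") == "int8"
```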

Licence and other remarks:

This is just a quantized version. Licence conditions are intended to be identical to those of the original Hugging Face repository.

Original description

Model Card for StarChat Alpha

StarChat is a series of language models that are fine-tuned from StarCoder to act as helpful coding assistants. StarChat Alpha is the first of these models, and as an alpha release it is only intended for educational or research purposes. In particular, the model has not been aligned to human preferences with techniques like RLHF, so it may generate problematic content (especially when prompted to do so).

Model Details
Model Description
Model Sources [optional]
Uses

StarChat Alpha is intended for educational and/or research purposes and in that respect can be used to probe the programming capabilities of open-source language models.

Bias, Risks, and Limitations

StarChat Alpha has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). Models trained primarily on code data will also have a more skewed demographic bias commensurate with the demographics of the GitHub community; for more on this, see the StarCoder dataset, which is derived from The Stack.

Since the base model was pretrained on a large corpus of code, it may produce code snippets that are syntactically valid but semantically incorrect. For example, it may produce code that does not compile or that produces incorrect results.
It may also produce code that is vulnerable to security exploits.
We have observed the model also has a tendency to produce false URLs which should be carefully inspected before clicking.

StarChat Alpha was fine-tuned from the base model StarCoder Base; please refer to its model card's Limitations section for relevant information. In particular, the model was evaluated on some categories of gender bias, propensity for toxicity, and risk of suggesting code completions with known security flaws; these evaluations are reported in its technical report.

How to Get Started with the Model

Use the code below to get started with the model.

from transformers import pipeline

pipe = pipeline("text-generation", model="HuggingFaceH4/starchat-alpha")
# Inputs use StarChat's chat tokens
inputs = "<|system|>\n<|end|>\n<|user|>\nHow can I sort a list in Python?<|end|>\n<|assistant|>"
outputs = pipe(inputs)
print(outputs[0]["generated_text"])
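The `inputs` string follows StarChat's chat-token template. A small hypothetical helper makes the layout explicit (the repository itself ships a `dialogue_template.json` for this purpose, copied during conversion above):

```python
def build_prompt(user_msg: str, system_msg: str = "") -> str:
    """Assemble a StarChat Alpha prompt from its special tokens."""
    return (
        f"<|system|>\n{system_msg}<|end|>\n"
        f"<|user|>\n{user_msg}<|end|>\n"
        f"<|assistant|>"
    )

prompt = build_prompt("How can I sort a list in Python?")
```

The trailing `<|assistant|>` token is left open so the model completes the assistant turn.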
Citation [optional]

BibTeX:

@article{Tunstall2023starchat-alpha,
  author = {Tunstall, Lewis and Lambert, Nathan and Rajani, Nazneen and Beeching, Edward and Le Scao, Teven and von Werra, Leandro and Han, Sheon and Schmid, Philipp and Rush, Alexander},
  title = {Creating a Coding Assistant with StarCoder},
  journal = {Hugging Face Blog},
  year = {2023},
  note = {https://huggingface.co/blog/starchat},
}


Licence: bigcode-openrail-m (https://choosealicense.com/licenses/bigcode-openrail-m)

Model page: https://huggingface.co/michaelfeil/ct2fast-starchat-alpha