stabilityai / stable-code-instruct-3b

huggingface.co
Total runs: 1.5K
24-hour runs: -49
7-day runs: -103
30-day runs: -529
Model's Last Updated: July 10, 2024
text-generation

Introduction to stable-code-instruct-3b

Model Details of stable-code-instruct-3b

Stable Code Instruct 3B

Try it out here: https://huggingface.co/spaces/stabilityai/stable-code-instruct-3b


Model Description

stable-code-instruct-3b is a 2.7 billion parameter decoder-only language model tuned from stable-code-3b. The model was trained on a mix of publicly available and synthetic datasets using Direct Preference Optimization (DPO).

This instruct tune demonstrates state-of-the-art performance (compared to models of similar size) on the MultiPL-E metrics across multiple programming languages, tested using BigCode's Evaluation Harness, and on the code portions of MT-Bench. The model is finetuned to make it usable in tasks such as:

  • General-purpose code and software-engineering conversations.
  • SQL-related generation and conversation.

Please note: For commercial use, please refer to https://stability.ai/license.

Usage

Here's how you can use the model:


import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model in bfloat16 and move the model to the GPU.
tokenizer = AutoTokenizer.from_pretrained("stabilityai/stable-code-instruct-3b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("stabilityai/stable-code-instruct-3b", torch_dtype=torch.bfloat16, trust_remote_code=True)
model.eval()
model = model.cuda()

# Chat-style conversation: a system message plus one user request.
messages = [
    {
        "role": "system",
        "content": "You are a helpful and polite assistant",
    },
    {
        "role": "user",
        "content": "Write a simple website in HTML. When a user clicks the button, it shows a random joke from a list of 4 jokes."
    },
]

# Render the conversation with the model's chat template and append the
# assistant turn header so the model continues as the assistant.
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

inputs = tokenizer([prompt], return_tensors="pt").to(model.device)

# Sample a completion of up to 1024 new tokens.
tokens = model.generate(
    **inputs,
    max_new_tokens=1024,
    temperature=0.5,
    top_p=0.95,
    top_k=100,
    do_sample=True,
    use_cache=True
)

# Decode only the newly generated tokens (everything after the prompt).
output = tokenizer.batch_decode(tokens[:, inputs.input_ids.shape[-1]:], skip_special_tokens=False)[0]
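
To see just the assistant's reply, you can decode the same slice of generated tokens with skip_special_tokens=True so the chat-template control tokens are dropped. A minimal follow-on sketch, reusing the model, tokenizer, inputs, and tokens from the snippet above:

# Follow-on to the snippet above: decode only the newly generated tokens and
# strip special/control tokens so only the plain assistant reply remains.
reply = tokenizer.batch_decode(
    tokens[:, inputs.input_ids.shape[-1]:],
    skip_special_tokens=True,
)[0]
print(reply)
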
Model Details
Performance
Multi-PL Benchmark:

| Model | Size | Avg | Python | C++ | JavaScript | Java | PHP | Rust |
|---|---|---|---|---|---|---|---|---|
| Codellama Instruct | 7B | 0.30 | 0.33 | 0.31 | 0.31 | 0.29 | 0.31 | 0.25 |
| Deepseek Instruct | 1.3B | 0.44 | 0.52 | 0.52 | 0.41 | 0.46 | 0.45 | 0.28 |
| Stable Code Instruct (SFT) | 3B | 0.44 | 0.55 | 0.45 | 0.42 | 0.42 | 0.44 | 0.32 |
| Stable Code Instruct (DPO) | 3B | 0.47 | 0.59 | 0.49 | 0.49 | 0.44 | 0.45 | 0.37 |
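
MultiPL-E measures functional correctness: a completion only counts if it passes the task's unit tests, and the table reports pass@1-style scores. As an illustrative sketch (not the BigCode Evaluation Harness configuration used for the table above), the code_eval metric from the Hugging Face evaluate library computes pass@k from candidate completions and test code; HF_ALLOW_CODE_EVAL=1 must be set because it executes generated code:

# Illustrative pass@k computation with the `evaluate` library's code_eval metric.
# This is only a sketch of how functional-correctness scores are derived, not
# the exact harness setup behind the Multi-PL numbers above.
import os
os.environ["HF_ALLOW_CODE_EVAL"] = "1"  # code_eval executes untrusted generated code

import evaluate

code_eval = evaluate.load("code_eval")

# One problem, two candidate completions (the second is wrong on purpose).
candidates = [["def add(a, b):\n    return a + b", "def add(a, b):\n    return a - b"]]
tests = ["assert add(2, 3) == 5"]

pass_at_k, results = code_eval.compute(references=tests, predictions=candidates, k=[1, 2])
print(pass_at_k)  # e.g. {'pass@1': 0.5, 'pass@2': 1.0}
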
MT-Bench Coding:

| Model | Size | Score |
|---|---|---|
| DeepSeek Coder | 1.3B | 4.6 |
| Stable Code Instruct (DPO) | 3B | 5.8 (ours) |
| Stable Code Instruct (SFT) | 3B | 5.5 |
| DeepSeek Coder | 6.7B | 6.9 |
| CodeLlama Instruct | 7B | 3.55 |
| StarChat2 | 15B | 5.7 |
SQL Performance

| Model | Size | Date | Group By | Order By | Ratio | Join | Where |
|---|---|---|---|---|---|---|---|
| Stable Code Instruct (DPO) | 3B | 24.0% | 54.2% | 68.5% | 40.0% | 54.2% | 42.8% |
| DeepSeek-Coder Instruct | 1.3B | 24.0% | 37.1% | 51.4% | 34.3% | 45.7% | 45.7% |
| SQLCoder | 7B | 64.0% | 82.9% | 74.3% | 54.3% | 74.3% | 74.3% |
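
The SQL columns above group accuracy by query feature (date handling, GROUP BY, ORDER BY, ratio calculations, JOIN, WHERE). A text-to-SQL prompt goes through the same chat pipeline as the Usage example; below is a minimal sketch that reuses the model and tokenizer loaded earlier, with a hypothetical two-table schema and question as the user message:

# Text-to-SQL sketch reusing `model` and `tokenizer` from the Usage section.
# The schema and question are hypothetical illustrative examples.
sql_messages = [
    {"role": "system", "content": "You are an expert SQL assistant."},
    {
        "role": "user",
        "content": (
            "Given the tables orders(id, customer_id, total, created_at) and "
            "customers(id, name), write a SQL query that returns each customer's "
            "name and total spend in 2023, ordered from highest to lowest."
        ),
    },
]

sql_prompt = tokenizer.apply_chat_template(sql_messages, add_generation_prompt=True, tokenize=False)
sql_inputs = tokenizer([sql_prompt], return_tensors="pt").to(model.device)
sql_tokens = model.generate(**sql_inputs, max_new_tokens=256, temperature=0.2, top_p=0.95, do_sample=True)
print(tokenizer.batch_decode(sql_tokens[:, sql_inputs.input_ids.shape[-1]:], skip_special_tokens=True)[0])
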
How to Cite
@misc{stable-code-instruct-3b,
      url={https://huggingface.co/stabilityai/stable-code-instruct-3b},
      title={Stable Code Instruct 3B},
      author={Phung, Duy and Pinnaparaju, Nikhil and Adithyan, Reshinth and Zhuravinskyi, Maksym and Tow, Jonathan and Cooper, Nathan}
}

Runs of stabilityai stable-code-instruct-3b on huggingface.co

Total runs: 1.5K
24-hour runs: -49
3-day runs: -86
7-day runs: -103
30-day runs: -529

More Information About the stable-code-instruct-3b Model on huggingface.co

For more details on the stable-code-instruct-3b license, visit:

https://choosealicense.com/licenses/other

stable-code-instruct-3b huggingface.co

stable-code-instruct-3b is an AI model hosted on huggingface.co that can be used instantly from the stabilityai stable-code-instruct-3b model page. huggingface.co supports a free trial of the stable-code-instruct-3b model and also provides paid usage. The model can be called through an API from Node.js, Python, or plain HTTP.
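
As a minimal sketch of the plain-HTTP route, the request below uses the Hugging Face Inference API endpoint for this model. It assumes the serverless Inference API is enabled for the model and that the (hypothetical) HF_TOKEN environment variable holds a valid Hugging Face access token:

# Minimal sketch of calling the model over HTTP via the Hugging Face Inference API.
# Assumes serverless inference is available for this model; HF_TOKEN is a
# hypothetical environment variable holding your access token.
import os
import requests

API_URL = "https://api-inference.huggingface.co/models/stabilityai/stable-code-instruct-3b"
headers = {"Authorization": f"Bearer {os.environ['HF_TOKEN']}"}

payload = {
    # In practice you would format this with the model's chat template,
    # as in the transformers example in the Usage section above.
    "inputs": "Write a Python function that checks whether a string is a palindrome.",
    "parameters": {"max_new_tokens": 256, "temperature": 0.5, "top_p": 0.95},
}

response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())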

stable-code-instruct-3b huggingface.co Url

https://huggingface.co/stabilityai/stable-code-instruct-3b

stabilityai stable-code-instruct-3b online free

huggingface.co serves as an online trial and API platform for stable-code-instruct-3b: it integrates the model's outputs, provides API services, and offers a free online trial. You can try stable-code-instruct-3b for free by clicking the link below.

stabilityai stable-code-instruct-3b free online trial URL on huggingface.co:

https://huggingface.co/stabilityai/stable-code-instruct-3b

stable-code-instruct-3b install

stable-code-instruct-3b is an openly available model whose weights are hosted on huggingface.co, so any user can download and install it locally (for example with the transformers library, as shown in the Usage section). huggingface.co also lets users try the installed model directly in the browser for debugging and evaluation, and the model can likewise be reached through the API.

stable-code-instruct-3b install URL on huggingface.co:

https://huggingface.co/stabilityai/stable-code-instruct-3b

Url of stable-code-instruct-3b

stable-code-instruct-3b huggingface.co Url: https://huggingface.co/stabilityai/stable-code-instruct-3b

Provider of stable-code-instruct-3b huggingface.co

stabilityai

Other API from stabilityai

  • huggingface.co: Total runs 433.6K, Run Growth 3.8K, Growth Rate 0.92%, Updated July 10, 2024
  • huggingface.co: Total runs 149.2K, Run Growth 10.6K, Growth Rate 7.42%, Updated August 04, 2023
  • huggingface.co: Total runs 131.9K, Run Growth 12.0K, Growth Rate 8.77%, Updated July 10, 2024
  • huggingface.co: Total runs 34.3K, Run Growth 3.3K, Growth Rate 9.68%, Updated August 09, 2024
  • huggingface.co: Total runs 378, Run Growth -97.9K, Growth Rate -25899.47%, Updated August 03, 2024
  • huggingface.co: Total runs 0, Run Growth 0, Growth Rate 0.00%, Updated July 10, 2024
  • huggingface.co: Total runs 0, Run Growth 0, Growth Rate 0.00%, Updated April 13, 2024