stabilityai / japanese-stablelm-instruct-gamma-7b

huggingface.co
Total runs: 1.8K
24-hour runs: 0
3-day runs: 152
7-day runs: 384
30-day runs: 1.1K
Model's Last Updated: January 24, 2024
text-generation

Japanese Stable LM Instruct Gamma 7B

Model Description

This is a 7B-parameter decoder-only Japanese language model fine-tuned on instruction-following datasets, built on top of the base model Japanese Stable LM Base Gamma 7B.

If you are in search of a smaller model, please check Japanese StableLM-3B-4E1T Instruct.

Usage

Ensure you are using Transformers 4.34.0 or newer.
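
To verify the requirement programmatically, here is a minimal check; the version-comparison snippet is my own addition, not part of the original card:

pip install -U "transformers>=4.34.0"

from packaging import version  # packaging ships as a transformers dependency
import transformers

assert version.parse(transformers.__version__) >= version.parse("4.34.0"), \
    "japanese-stablelm-instruct-gamma-7b needs transformers >= 4.34.0"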

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model weights from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("stabilityai/japanese-stablelm-instruct-gamma-7b")
model = AutoModelForCausalLM.from_pretrained(
  "stabilityai/japanese-stablelm-instruct-gamma-7b",
  torch_dtype="auto",  # keep the checkpoint's native precision where supported
)
model.eval()

# Move the model to GPU if one is available
if torch.cuda.is_available():
    model = model.to("cuda")

def build_prompt(user_query, inputs="", sep="\n\n### "):
    # System message: "Below is a combination of an instruction that describes
    # a task and input that provides context. Write a response that
    # appropriately satisfies the request."
    sys_msg = "以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。要求を適切に満たす応答を書きなさい。"
    p = sys_msg
    roles = ["指示", "応答"]  # "instruction", "response"
    msgs = [": \n" + user_query, ": \n"]
    if inputs:
        # Optional context goes in an "入力" ("input") section
        roles.insert(1, "入力")
        msgs.insert(1, ": \n" + inputs)
    for role, msg in zip(roles, msgs):
        p += sep + role + msg
    return p

# Infer with a prompt that includes additional input (context)
user_inputs = {
    # "Explain the meaning of the given proverb so that even an elementary
    # school student can understand it."
    "user_query": "与えられたことわざの意味を小学生でも分かるように教えてください。",
    # The proverb 情けは人のためならず ("kindness is not just for others' sake")
    "inputs": "情けは人のためならず"
}
prompt = build_prompt(**user_inputs)

input_ids = tokenizer.encode(
    prompt,
    add_special_tokens=True,
    return_tensors="pt"
)

# Sample up to 256 new tokens with nucleus sampling
tokens = model.generate(
    input_ids.to(device=model.device),
    max_new_tokens=256,
    temperature=1,
    top_p=0.95,
    do_sample=True,
)

# Decode only the newly generated tokens, skipping the prompt
out = tokenizer.decode(tokens[0][input_ids.shape[1]:], skip_special_tokens=True).strip()
print(out)
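
The helper also works without the optional context field; below is a minimal sketch reusing the tokenizer, model, and build_prompt defined above (the query string is an illustrative example of my own):

# Infer with a prompt that has no additional input
# "What is the tallest mountain in Japan?"
prompt = build_prompt(user_query="日本で一番高い山は何ですか？")

input_ids = tokenizer.encode(prompt, add_special_tokens=True, return_tensors="pt")
tokens = model.generate(
    input_ids.to(device=model.device),
    max_new_tokens=256,
    temperature=1,
    top_p=0.95,
    do_sample=True,
)
print(tokenizer.decode(tokens[0][input_ids.shape[1]:], skip_special_tokens=True).strip())
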
Model Details

Model Architecture

This model follows the Mistral-7B architecture. For details, please see Mistral AI's paper and release blog post.
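
If you want to confirm the architecture locally rather than consult the paper, the configuration can be inspected without downloading the full weights; a small sketch using the standard transformers AutoConfig API (the expected field values are my assumptions about Mistral-family configs):

from transformers import AutoConfig

config = AutoConfig.from_pretrained("stabilityai/japanese-stablelm-instruct-gamma-7b")
print(config.model_type)  # expected to report the Mistral architecture family
print(config.num_hidden_layers, config.hidden_size, config.num_attention_heads)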

Training Datasets

Use and Limitations

Intended Use

The model is intended to be used by anyone as a foundational model for application-specific fine-tuning, without strict limitations on commercial use.
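
As one concrete illustration of application-specific fine-tuning, the sketch below uses parameter-efficient LoRA adapters via the Hugging Face peft library; the choice of peft, the target module names, and all hyperparameters are my assumptions rather than recommendations from the model card:

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/japanese-stablelm-instruct-gamma-7b",
    torch_dtype="auto",
)

# LoRA adapter configuration; target_modules use common Mistral-style
# attention projection names (an assumption; verify against the checkpoint)
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices are trainable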

Limitations and bias

The pre-training dataset may have contained offensive or inappropriate content, even after data cleansing filters were applied, and this can be reflected in the model-generated text. We recommend that users exercise reasonable caution when using these models in production systems. Do not use the model for any applications that may cause harm or distress to individuals or groups.

Credits

The fine-tuning was carried out by Fujiki Nakamura. Other aspects, including data preparation and evaluation, were handled by the Language Team of Stability AI Japan, notably Meng Lee, Makoto Shing, Paul McCann, Naoki Orii, and Takuya Akiba.

Acknowledgements

This model is based on Mistral-7B-v0.1 released by the Mistral AI team. We are grateful to the Mistral AI team for providing such an excellent base model.

We are grateful for the contributions of the EleutherAI Polyglot-JA team in helping us collect a large amount of Japanese pre-training data. Polyglot-JA members include Hyunwoong Ko (Project Lead), Fujiki Nakamura (who originally started this project during his time with the Polyglot team), Yunho Mo, Minji Jung, KeunSeok Im, and Su-Kyeong Jang.

We are also appreciative of AI Novelist/Sta (Bit192, Inc.) and the numerous contributors from Stable Community Japan for assisting us in gathering a large amount of high-quality Japanese textual data for model training.


License

Apache 2.0: https://choosealicense.com/licenses/apache-2.0

Model URL

https://huggingface.co/stabilityai/japanese-stablelm-instruct-gamma-7b

Provider

stabilityai
