stabilityai / japanese-stablelm-instruct-beta-70b

huggingface.co
Total runs: 1.2K
24-hour runs: 72
7-day runs: 354
30-day runs: 202
Model Last Updated: December 19, 2023
text-generation

Introduction of japanese-stablelm-instruct-beta-70b

Model Details of japanese-stablelm-instruct-beta-70b

Japanese-StableLM-Instruct-Beta-70B

A cute robot wearing a kimono writes calligraphy with one single brush (image generated with Stable Diffusion XL)

Model Description

japanese-stablelm-instruct-beta-70b is a 70B-parameter decoder-only language model based on japanese-stablelm-base-beta-70b and further fine-tuned on Databricks Dolly-15k, Anthropic HH, and other public datasets.

This model is also available in a smaller 7B version, or in a smaller and faster version with a specialized tokenizer.

Usage

First, install the additional dependencies listed in requirements.txt:

pip install -r requirements.txt
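
If the repository's requirements.txt is not at hand, a minimal dependency set for the snippet below would look roughly like this (this package list is an assumption, not the repository's pinned file):

pip install torch transformers accelerate sentencepiece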

Then start generating text with japanese-stablelm-instruct-beta-70b by using the following code snippet:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "stabilityai/japanese-stablelm-instruct-beta-70b"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# The next line may need to be modified depending on the environment
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, low_cpu_mem_usage=True, device_map="auto")

# Build a Llama 2 chat-style prompt; the system message reads
# "You are a helpful assistant." in Japanese
def build_prompt(user_query, inputs):
    sys_msg = "<s>[INST] <<SYS>>\nあなたは役立つアシスタントです。\n<</SYS>>\n\n"
    p = sys_msg + user_query + "\n\n" + inputs + " [/INST] "
    return p

# Build a prompt: ask the model to explain a given proverb
# so that even an elementary school student can understand it
user_inputs = {
    "user_query": "与えられたことわざの意味を小学生でも分かるように教えてください。",
    "inputs": "情けは人のためならず"
}
prompt = build_prompt(**user_inputs)

# The prompt already begins with the BOS token <s>,
# so don't add special tokens again here
input_ids = tokenizer.encode(
    prompt,
    add_special_tokens=False,
    return_tensors="pt"
)

# Fix the seed for reproducibility;
# feel free to change it to get different results
seed = 23
torch.manual_seed(seed)

tokens = model.generate(
    input_ids.to(device=model.device),
    max_new_tokens=128,
    temperature=0.99,
    top_p=0.95,
    do_sample=True,
)

out = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(out)
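
Loading a 70B model in float16 needs roughly 140 GB of accelerator memory. If that is out of reach, one option is quantized loading through the transformers bitsandbytes integration. The following is a minimal sketch; it assumes the bitsandbytes package is installed, and the quantization settings are illustrative rather than an official recommendation:

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization, computing in float16
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/japanese-stablelm-instruct-beta-70b",
    quantization_config=quantization_config,
    device_map="auto",
)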

We suggest experimenting with the generation config (top_p, repetition_penalty, etc.) to find the best setup for your tasks. For example, use a higher temperature for roleplay tasks and a lower temperature for reasoning, as sketched below.
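As a rough illustration of that advice (the values below are starting points to experiment from, not tuned recommendations):

# Creative tasks such as roleplay: sample more freely
tokens = model.generate(
    input_ids.to(device=model.device),
    max_new_tokens=128,
    do_sample=True,
    temperature=1.0,
    top_p=0.95,
    repetition_penalty=1.1,
)

# Reasoning tasks: stay closer to greedy decoding
tokens = model.generate(
    input_ids.to(device=model.device),
    max_new_tokens=128,
    do_sample=True,
    temperature=0.3,
    top_p=0.9,
)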

Model Details
Training Dataset

The instruction-tuning data consisted of Japanese-translated versions of public datasets, including Databricks Dolly-15k and Anthropic HH, shared by kunishou.

Use and Limitations
Intended Use

The model is intended to be used by all individuals as a foundation for application-specific fine-tuning without strict limitations on commercial use.

Limitations and bias

The pre-training dataset may have contained offensive or inappropriate content even after applying data cleansing filters, and this can be reflected in model-generated text. We recommend that users exercise reasonable caution when using these models in production systems. Do not use the model for any applications that may cause harm or distress to individuals or groups.

Authors

This model was developed by the Research & Development team at Stability AI Japan; development was co-led by Takuya Akiba and Meng Lee.

Acknowledgements

We thank Meta Research for releasing Llama 2 under an open license for others to build on.

We are grateful to the EleutherAI Polyglot-JA team for helping us collect a large amount of Japanese pre-training data. Polyglot-JA members include Hyunwoong Ko (Project Lead), Fujiki Nakamura (who originally started this project when he committed to the Polyglot team), Yunho Mo, Minji Jung, KeunSeok Im, and Su-Kyeong Jang.

We are also appreciative of AI Novelist/Sta (Bit192, Inc.) and the numerous contributors from Stable Community Japan for assisting us in gathering a large amount of high-quality Japanese textual data for model training.

Runs of stabilityai japanese-stablelm-instruct-beta-70b on huggingface.co

Total runs: 1.2K
24-hour runs: 72
3-day runs: 190
7-day runs: 354
30-day runs: 202

More Information About japanese-stablelm-instruct-beta-70b huggingface.co Model

For the japanese-stablelm-instruct-beta-70b license (Llama 2), visit:

https://choosealicense.com/licenses/llama2

japanese-stablelm-instruct-beta-70b huggingface.co

japanese-stablelm-instruct-beta-70b is an AI model hosted on huggingface.co, where it can be used instantly. huggingface.co supports a free trial of the model as well as paid use, and the model can be called through an API from Node.js, Python, or plain HTTP.
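
For example, a Python API call could go through the huggingface_hub client. This is a minimal sketch; it assumes a hosted text-generation endpoint is available for this model, which is not guaranteed:

from huggingface_hub import InferenceClient

client = InferenceClient(model="stabilityai/japanese-stablelm-instruct-beta-70b")

# Query in the model's Llama 2 chat format; the prompt asks
# "Where is the capital of Japan?" in Japanese
prompt = "<s>[INST] <<SYS>>\nあなたは役立つアシスタントです。\n<</SYS>>\n\n日本の首都はどこですか? [/INST] "
print(client.text_generation(prompt, max_new_tokens=64))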

japanese-stablelm-instruct-beta-70b huggingface.co Url

https://huggingface.co/stabilityai/japanese-stablelm-instruct-beta-70b

stabilityai japanese-stablelm-instruct-beta-70b online free

huggingface.co is an online trial and API platform that integrates japanese-stablelm-instruct-beta-70b, including API services, and provides a free online trial; you can try japanese-stablelm-instruct-beta-70b for free by clicking the link below.

stabilityai japanese-stablelm-instruct-beta-70b online free url in huggingface.co:

https://huggingface.co/stabilityai/japanese-stablelm-instruct-beta-70b

japanese-stablelm-instruct-beta-70b install

japanese-stablelm-instruct-beta-70b is an open model whose weights are freely available for anyone to download and install. huggingface.co also hosts the model, so users can try and debug it directly on huggingface.co, and free API access is supported.
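
For a local install, one way to fetch the weights is with huggingface_hub. A minimal sketch (the local directory path is just an example):

from huggingface_hub import snapshot_download

# Download all model files (on the order of 140 GB) to a local directory
snapshot_download(
    repo_id="stabilityai/japanese-stablelm-instruct-beta-70b",
    local_dir="./japanese-stablelm-instruct-beta-70b",
)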

japanese-stablelm-instruct-beta-70b install url in huggingface.co:

https://huggingface.co/stabilityai/japanese-stablelm-instruct-beta-70b


Provider of japanese-stablelm-instruct-beta-70b huggingface.co

stabilityai
