from textgen import GptModel

def generate_prompt(instruction):
    return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:{instruction}\n\n### Response: """

ziya_model_dir = ""  # path to the merged Ziya model
model = GptModel("llama", ziya_model_dir, peft_name="shibing624/ziya-llama-13b-medical-lora")
# "What medicine can a one-year-old baby take for a fever?"
predict_sentence = generate_prompt("一岁宝宝发烧能吃啥药?")
r = model.predict([predict_sentence])
print(r)  # ["1、首先大多数小儿退热药中含有解热镇痛成分阿司匹林或布洛芬等,这类药品虽然副作用较少..."]
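The Alpaca-style prompt template used above can be sanity-checked without loading any model weights; this sketch simply reproduces `generate_prompt` and inspects the string it builds:

```python
def generate_prompt(instruction):
    # Same Alpaca-style template as in the usage example above
    return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:{instruction}\n\n### Response: """

prompt = generate_prompt("who are you?")
print(prompt.count("### Instruction:"))        # exactly one instruction marker
print(prompt.rstrip().endswith("### Response:"))  # prompt ends at the response slot
```

Because the template ends right after `### Response: `, the model's completion is appended directly as the answer.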
Usage (HuggingFace Transformers)

Without textgen, you can use the model like this: first, pass your input through the transformer model, then decode the generated tokens into a sentence.
Install packages (the example below also needs peft):

pip install transformers peft
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

ziya_model_dir = ""  # path to the merged Ziya model
model = LlamaForCausalLM.from_pretrained(ziya_model_dir, device_map='auto')
tokenizer = LlamaTokenizer.from_pretrained(ziya_model_dir)
model = PeftModel.from_pretrained(model, "shibing624/ziya-llama-13b-medical-lora")
device = "cuda" if torch.cuda.is_available() else "cpu"

def generate_prompt(instruction):
    return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:{instruction}\n\n### Response: """
sents = ['一岁宝宝发烧能吃啥药', "who are you?"]
for s in sents:
    q = generate_prompt(s)
    inputs = tokenizer(q, return_tensors="pt")
    inputs = inputs.to(device=device)
    generate_ids = model.generate(
        **inputs,
        max_new_tokens=120,
        do_sample=True,
        top_p=0.85,
        temperature=1.0,
        repetition_penalty=1.0
    )
    output = tokenizer.batch_decode(generate_ids, skip_special_tokens=True)[0]
    print(output)
    print()
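Note that `batch_decode` returns the full decoded sequence, i.e. the prompt followed by the completion. If you only want the model's answer, you can strip the echoed prompt. A minimal helper (hypothetical, not part of the model card's API), assuming the answer follows the final `### Response:` marker:

```python
def extract_response(decoded: str, marker: str = "### Response:") -> str:
    """Return only the text after the last response marker (hypothetical helper)."""
    _, _, answer = decoded.rpartition(marker)
    return answer.strip()

decoded = ("Below is an instruction ...\n\n### Instruction:who are you?"
           "\n\n### Response: I am a medical assistant model.")
print(extract_response(decoded))  # I am a medical assistant model.
```

`rpartition` splits on the last occurrence of the marker, so nested or repeated markers in the instruction itself do not confuse the extraction.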
@software{textgen,
author = {Ming Xu},
title = {textgen: Implementation of language model finetune},
year = {2023},
url = {https://github.com/shibing624/textgen},
}