from pycorrector import GptCorrector

# Load the ChatGLM3-6B base model and apply the Chinese spelling correction (CSC) LoRA adapter
model = GptCorrector("THUDM/chatglm3-6b", "chatglm", peft_name="shibing624/chatglm3-6b-csc-chinese-lora")
r = model.correct_batch(["少先队员因该为老人让坐。"])
print(r)  # ['少先队员应该为老人让座。']
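This assumes pycorrector is already installed; if not, it can be installed from PyPI first (a typical install command, not pinned to a specific version):

pip install -U pycorrector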
Usage (HuggingFace Transformers)
Without pycorrector, you can use the model like this:
First, wrap your input in the correction prompt and pass it through the model; then decode the generated (corrected) sentence.
Install package:
pip install transformers
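The example below also imports torch and peft, so those packages need to be available as well; one way to get them (versions not pinned here) is:

pip install torch peft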
import os
import torch
from peft import PeftModel
from transformers import AutoTokenizer, AutoModel
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm3-6b", trust_remote_code=True)
# load the base ChatGLM3-6B model in fp16 on the GPU
model = AutoModel.from_pretrained("THUDM/chatglm3-6b", trust_remote_code=True).half().cuda()
# attach the CSC LoRA adapter on top of the base model
model = PeftModel.from_pretrained(model, "shibing624/chatglm3-6b-csc-chinese-lora")
# each input is prefixed with the instruction "对下面文本纠错" ("correct the following text")
sents = ['对下面文本纠错\n\n少先队员因该为老人让坐。',
         '对下面文本纠错\n\n下个星期,我跟我朋唷打算去法国玩儿。']
def get_prompt(user_query):
    vicuna_prompt = "A chat between a curious user and an artificial intelligence assistant. " \
                    "The assistant gives helpful, detailed, and polite answers to the user's questions. " \
                    "USER: {query} ASSISTANT:"
    return vicuna_prompt.format(query=user_query)
for s in sents:
    q = get_prompt(s)
    input_ids = tokenizer(q).input_ids
    generation_kwargs = dict(max_new_tokens=128, do_sample=True, temperature=0.8)
    outputs = model.generate(input_ids=torch.as_tensor([input_ids]).to('cuda:0'), **generation_kwargs)
    # drop the prompt tokens and keep only the newly generated tokens
    output_tensor = outputs[0][len(input_ids):]
    response = tokenizer.decode(output_tensor, skip_special_tokens=True)
    print(response)
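For the first example sentence, the corrected output should read roughly 少先队员应该为老人让座。, consistent with the pycorrector example above; the exact wording can vary from run to run because do_sample=True.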