The model corrects spelling errors and typos by normalizing all words in the text to standard English.
The proofreader was trained on top of the T5-large model. An extensive dataset with "artificial" errors served as the training corpus: the corpus was assembled from English-language Wikipedia and news blogs, and then typos and spelling errors were automatically introduced into it using the SAGE library.
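The corruption step can be illustrated with a minimal sketch. Note this is an illustrative stand-in, not SAGE's actual API; the `corrupt` function and its error operations are hypothetical simplifications of the statistically grounded error models SAGE applies:

```python
import random

def corrupt(text: str, rate: float = 0.1, seed: int = 0) -> str:
    """Introduce artificial typos by dropping, duplicating, or swapping characters.

    Illustrative stand-in for SAGE-style corruption; the real library
    models realistic, frequency-based human error patterns.
    """
    rng = random.Random(seed)
    chars = list(text)
    out = []
    i = 0
    while i < len(chars):
        if chars[i].isalpha() and rng.random() < rate:
            op = rng.choice(["drop", "dup", "swap"])
            if op == "drop":
                i += 1          # skip the character entirely
                continue
            if op == "dup":
                out.append(chars[i])  # duplicate, then append once more below
            elif op == "swap" and i + 1 < len(chars):
                out.append(chars[i + 1])  # transpose adjacent characters
                out.append(chars[i])
                i += 2
                continue
        out.append(chars[i])
        i += 1
    return "".join(out)

clean = "The festival was excellent in many ways."
print(corrupt(clean))
```

Pairs of (corrupted, clean) sentences produced this way give the model supervision for mapping noisy text back to its correct form.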
Input: Th festeivаl was excelzecnt in many ways, and in particular it beinganinternational festjival sss a chаllenging, bet brilli an t ea.
Output: The festival was excellent in many ways, and in particular it beinganinternational festival is a challenging, but brilliant one to see.

Input: That 's why I believe in the solution which is the closest to human nature and can help us to avoid boredome. I am sure that eventually we will take off our clothes and in the future we will be undressed and free. There wo n't be any problem with being up - do - date .
Output: That's why I believe in the solution which is the closest to human nature and can help us to avoid boredom. I am sure that eventually we will take off our clothes and in the future we will be undressed and free. There won't be any problem with being up - do - date.

Input: If you bought something goregous, you well be very happy.
Output: If you bought something gorgeous, you will be very happy.
Metrics
Quality
Below are automatic metrics for assessing the quality of spell checkers.
We compare our solution both with open-source automatic spell checkers and with the ChatGPT family of models on two available datasets:
BEA60K: English spelling errors collected from several domains;
JFLEG: 1,601 English sentences containing about 2,000 spelling errors.
from transformers import T5ForConditionalGeneration, AutoTokenizer

# Load the fine-tuned checkpoint and its tokenizer from the Hugging Face Hub
path_to_model = "ai-forever/T5-large-spell"
model = T5ForConditionalGeneration.from_pretrained(path_to_model)
tokenizer = AutoTokenizer.from_pretrained(path_to_model)

# The model expects every input to start with the "grammar: " prefix
prefix = "grammar: "
sentence = "If you bought something goregous, you well be very happy."
sentence = prefix + sentence

# Tokenize, generate the corrected text, and decode it back to a string
encodings = tokenizer(sentence, return_tensors="pt")
generated_tokens = model.generate(**encodings)
answer = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(answer)
# ["If you bought something gorgeous, you will be very happy."]
The T5-large model, on which our solution is based, and its source code are distributed under the Apache 2.0 license.
Our solution is distributed under the MIT license.