⚠️ Disclaimer: The Hugging Face models currently give different results to the detoxify library (see the issue here).
Labels
All challenges have a toxicity label. The toxicity labels represent the aggregate ratings of up to 10 annotators, according to the following schema (one plausible aggregation is sketched after the list):
Very Toxic
(a very hateful, aggressive, or disrespectful comment that is very likely to make you leave a discussion or give up on sharing your perspective)
Toxic
(a rude, disrespectful, or unreasonable comment that is somewhat likely to make you leave a discussion or give up on sharing your perspective)
Hard to Say
Not Toxic
More information about the labelling schema can be found here.
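As a concrete illustration, the aggregate label can be thought of as the fraction of decisive annotators who rated a comment as at least Toxic. The following is a minimal sketch of one plausible aggregation, not the exact Jigsaw procedure; the rating values and the treatment of "Hard to Say" are assumptions:

# Hypothetical sketch: aggregate up to 10 annotator ratings into a single
# soft toxicity score (the exact Jigsaw aggregation may differ).
RATING_VALUES = {
    "very_toxic": 1.0,
    "toxic": 1.0,
    "hard_to_say": None,  # assumed: excluded from the aggregate
    "not_toxic": 0.0,
}

def aggregate_toxicity(ratings):
    """Return the fraction of decisive annotators who rated the comment toxic."""
    scores = [RATING_VALUES[r] for r in ratings if RATING_VALUES[r] is not None]
    return sum(scores) / len(scores) if scores else 0.0

# Example: 7 of 9 decisive annotators rated the comment toxic or very toxic.
print(aggregate_toxicity(
    ["toxic"] * 5 + ["very_toxic"] * 2 + ["not_toxic"] * 2 + ["hard_to_say"]
))  # 7/9 ≈ 0.78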
Toxic Comment Classification Challenge
This challenge includes the following labels (a minimal example record is sketched after the list):
toxic
severe_toxic
obscene
threat
insult
identity_hate
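For illustration only, a training example in this challenge can be represented as a comment paired with an independent binary target for each of the six labels. This is a hypothetical sketch; the field names are assumptions, not the official dataset schema:

# Hypothetical multi-label record: each label is an independent binary target,
# so a single comment can carry several labels at once.
example = {
    "comment_text": "example comment",
    "labels": {
        "toxic": 1,
        "severe_toxic": 0,
        "obscene": 1,
        "threat": 0,
        "insult": 1,
        "identity_hate": 0,
    },
}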
Jigsaw Unintended Bias in Toxicity Classification
This challenge has two types of labels: the main toxicity labels and additional identity labels representing the identities mentioned in the comments.
Only identities with more than 500 examples in the test set (combined public and private) are included during training as additional labels and in the evaluation calculation (see the sketch after the identity list below).
toxicity
severe_toxicity
obscene
threat
insult
identity_attack
sexual_explicit
Identity labels used:
male
female
homosexual_gay_or_lesbian
christian
jewish
muslim
black
white
psychiatric_or_mental_illness
A complete list of all the identity labels available can be found here.
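The more-than-500-examples rule above can be expressed as a simple filter over the combined test set. This is a minimal sketch assuming a pandas DataFrame with one fractional column per identity; the file name, column semantics, and 0.5 threshold are assumptions:

import pandas as pd

# Hypothetical: combined public + private test set with one column per identity,
# where a value above 0.5 is treated as the identity being mentioned.
test_df = pd.read_csv("test_combined.csv")  # assumed file name

IDENTITY_COLUMNS = [
    "male", "female", "homosexual_gay_or_lesbian", "christian",
    "jewish", "muslim", "black", "white", "psychiatric_or_mental_illness",
]

# Keep only identities with more than 500 positive examples in the test set.
used_identities = [
    col for col in IDENTITY_COLUMNS if (test_df[col] > 0.5).sum() > 500
]
print(used_identities)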
Usage
Optimum
Loading the model requires the 🤗 Optimum library to be installed.
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

# Load the ONNX model and its tokenizer from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("laiyer/unbiased-toxic-roberta-onnx")
model = ORTModelForSequenceClassification.from_pretrained("laiyer/unbiased-toxic-roberta-onnx")

classifier = pipeline(
    task="text-classification",
    model=model,
    tokenizer=tokenizer,
)

classifier_output = classifier("It's not a toxic comment.")
print(classifier_output)
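Since the model is multi-label, the default pipeline output contains only the highest-scoring label. To see an independent sigmoid score for every label, you can pass the standard text-classification pipeline options top_k=None and function_to_apply="sigmoid", as in this sketch:

# Return a sigmoid score for every label instead of only the top one.
classifier_all = pipeline(
    task="text-classification",
    model=model,
    tokenizer=tokenizer,
    top_k=None,
    function_to_apply="sigmoid",
)
print(classifier_all("It's not a toxic comment."))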