protectai / unbiased-toxic-roberta-onnx

huggingface.co
Total runs: 39.0K
24-hour runs: 0
7-day runs: 9.0K
30-day runs: 28.3K
Model's Last Updated: March 25, 2024
token-classification

Introduction to unbiased-toxic-roberta-onnx

Model Details of unbiased-toxic-roberta-onnx

ONNX version of unitary/unbiased-toxic-roberta

This model is a conversion of unitary/unbiased-toxic-roberta to ONNX format using the 🤗 Optimum library.
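
A conversion like this can be reproduced with the 🤗 Optimum Python API. The sketch below is a minimal illustration: the exact export settings used for this repository are not documented here, and the output directory name is an assumption.

from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer

# export=True converts the original PyTorch checkpoint to ONNX on the fly.
model = ORTModelForSequenceClassification.from_pretrained(
    "unitary/unbiased-toxic-roberta", export=True
)
tokenizer = AutoTokenizer.from_pretrained("unitary/unbiased-toxic-roberta")

# Save the converted model and tokenizer; the directory name is illustrative.
model.save_pretrained("./unbiased-toxic-roberta-onnx")
tokenizer.save_pretrained("./unbiased-toxic-roberta-onnx")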

Trained models & code to predict toxic comments on 3 Jigsaw challenges: Toxic comment classification, Unintended Bias in Toxic comments, Multilingual toxic comment classification.

Built by Laura Hanu at Unitary.

⚠️ Disclaimer: The Hugging Face models currently give different results from the detoxify library (see the issue here).

Labels

All challenges have a toxicity label. The toxicity labels represent the aggregate ratings of up to 10 annotators, according to the following schema:

  • Very Toxic (a very hateful, aggressive, or disrespectful comment that is very likely to make you leave a discussion or give up on sharing your perspective)
  • Toxic (a rude, disrespectful, or unreasonable comment that is somewhat likely to make you leave a discussion or give up on sharing your perspective)
  • Hard to Say
  • Not Toxic

More information about the labelling schema can be found here.

Toxic Comment Classification Challenge

This challenge includes the following labels:

  • toxic
  • severe_toxic
  • obscene
  • threat
  • insult
  • identity_hate

Jigsaw Unintended Bias in Toxicity Classification

This challenge has two types of labels: the main toxicity labels and additional identity labels that represent the identities mentioned in the comments. A sketch after the identity-label list below shows how to read the per-label scores.

Only identities with more than 500 examples in the test set (combined public and private) are included during training as additional labels and in the evaluation calculation.

  • toxicity
  • severe_toxicity
  • obscene
  • threat
  • insult
  • identity_attack
  • sexual_explicit

Identity labels used:

  • male
  • female
  • homosexual_gay_or_lesbian
  • christian
  • jewish
  • muslim
  • black
  • white
  • psychiatric_or_mental_illness

A complete list of all the identity labels available can be found here.
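
Because this is a multi-label model, each label above gets its own score. Here is a minimal sketch of reading the per-label scores, assuming the same pipeline API as the Usage section below (the example text is illustrative):

from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "protectai/unbiased-toxic-roberta-onnx"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = ORTModelForSequenceClassification.from_pretrained(model_id)

classifier = pipeline(
    task="text-classification",
    model=model,
    tokenizer=tokenizer,
    top_k=None,  # return a score for every label, not just the top one
)

scores = classifier("I can't stand people like you")[0]

# Split the output into main toxicity scores and identity scores.
identity_labels = {
    "male", "female", "homosexual_gay_or_lesbian", "christian", "jewish",
    "muslim", "black", "white", "psychiatric_or_mental_illness",
}
identity_scores = [s for s in scores if s["label"] in identity_labels]
toxicity_scores = [s for s in scores if s["label"] not in identity_labels]
print(toxicity_scores)
print(identity_scores)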

Usage
Optimum

Loading the model requires the 🤗 Optimum library to be installed.

from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline


tokenizer = AutoTokenizer.from_pretrained("laiyer/unbiased-toxic-roberta-onnx")
model = ORTModelForSequenceClassification.from_pretrained("laiyer/unbiased-toxic-roberta-onnx")
classifier = pipeline(
    task="text-classification",
    model=model,
    tokenizer=tokenizer,
)

classifier_output = classifier("It's not a toxic comment")
print(classifier_output)
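
Optimum also lets you choose the ONNX Runtime execution provider at load time. Here is a minimal sketch for GPU inference, assuming the onnxruntime-gpu package is installed:

from optimum.onnxruntime import ORTModelForSequenceClassification

# CPUExecutionProvider is the default; CUDA requires onnxruntime-gpu.
model = ORTModelForSequenceClassification.from_pretrained(
    "laiyer/unbiased-toxic-roberta-onnx",
    provider="CUDAExecutionProvider",
)
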
LLM Guard

Toxicity scanner
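
This model backs the Toxicity input scanner in LLM Guard. A minimal sketch of how that scanner is typically used, assuming the llm-guard package is installed (the threshold value is illustrative; check the LLM Guard documentation for the current API):

from llm_guard.input_scanners import Toxicity

# Prompts whose toxicity score exceeds the threshold are marked invalid.
scanner = Toxicity(threshold=0.5)
sanitized_prompt, is_valid, risk_score = scanner.scan("You are an idiot")
print(is_valid, risk_score)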

Community

Join our Slack to give us feedback, connect with the maintainers and fellow users, ask questions, or engage in discussions about LLM security!

More Information About unbiased-toxic-roberta-onnx

unbiased-toxic-roberta-onnx is released under the Apache 2.0 license:

https://choosealicense.com/licenses/apache-2.0

unbiased-toxic-roberta-onnx on huggingface.co

unbiased-toxic-roberta-onnx is hosted on huggingface.co, where it can be tried online for free and called through an API from Node.js, Python, or plain HTTP:

https://huggingface.co/protectai/unbiased-toxic-roberta-onnx

unbiased-toxic-roberta-onnx install

unbiased-toxic-roberta-onnx is an open-source model that any user can download and install locally. huggingface.co also hosts the model, so it can be tried and debugged there directly, including through the API:

https://huggingface.co/protectai/unbiased-toxic-roberta-onnx

Provider of unbiased-toxic-roberta-onnx

protectai