This model is a fine-tuned version of distilroberta-base on multiple combined datasets of rejections from different LLMs and normal responses from RLHF datasets. It aims to identify rejections in LLM output when the prompt doesn't pass content moderation, classifying inputs into two categories: 0 for normal output and 1 for rejection detected.
It achieves the evaluation results reported in the Training results table below.
The model's performance depends on the nature and quality of the training data, and it may not perform well on text styles or topics that are not represented in the training set. Additionally, distilroberta-base is a case-sensitive model.
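Because the tokenizer does not lowercase input, differently cased inputs produce different token sequences and can receive different scores. A minimal sketch illustrating this (the exact tokens depend on the tokenizer's vocabulary):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ProtectAI/distilroberta-base-rejection-v1")

# The BPE tokenizer preserves case, so casing changes the tokens
print(tokenizer.tokenize("Sorry, but I can't assist with that."))
print(tokenizer.tokenize("SORRY, BUT I CAN'T ASSIST WITH THAT."))
# The two lists differ, so the classifier sees genuinely different inputs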
How to Get Started with the Model
Transformers
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
import torch

# Load the tokenizer and the fine-tuned classification model
tokenizer = AutoTokenizer.from_pretrained("ProtectAI/distilroberta-base-rejection-v1")
model = AutoModelForSequenceClassification.from_pretrained("ProtectAI/distilroberta-base-rejection-v1")

# Build a text-classification pipeline; inputs longer than 512 tokens are truncated
classifier = pipeline(
    "text-classification",
    model=model,
    tokenizer=tokenizer,
    truncation=True,
    max_length=512,
    device=torch.device("cuda" if torch.cuda.is_available() else "cpu"),
)

# Prints a label and a confidence score for the input
print(classifier("Sorry, but I can't assist with that."))
Optimum with ONNX
Loading the model requires the 🤗 Optimum library to be installed.
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

# Load the pre-exported ONNX weights from the "onnx" subfolder of the repository
tokenizer = AutoTokenizer.from_pretrained("ProtectAI/distilroberta-base-rejection-v1", subfolder="onnx")
model = ORTModelForSequenceClassification.from_pretrained("ProtectAI/distilroberta-base-rejection-v1", export=False, subfolder="onnx")

classifier = pipeline(
    task="text-classification",
    model=model,
    tokenizer=tokenizer,
    truncation=True,
    max_length=512,
)

print(classifier("Sorry, but I can't assist with that."))
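The ONNX pipeline returns the same label-and-score output as the Transformers pipeline above; only the runtime backend differs.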
Use in LLM Guard
You can use the NoRefusal Scanner to detect whether a model's output was rejected, which can signal that something is going wrong with the prompt.
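A minimal sketch of wiring this up, assuming the llm-guard package's output-scanner interface (the scan signature and the threshold parameter follow LLM Guard's documentation and may change between versions):

from llm_guard.output_scanners import NoRefusal

# The scanner wraps this model; threshold is the risk score above which output is flagged
scanner = NoRefusal(threshold=0.5)

prompt = "Tell me how to pick a lock."
model_output = "Sorry, but I can't assist with that."

# Returns the output, whether it passed the scan, and a risk score
sanitized_output, is_valid, risk_score = scanner.scan(prompt, model_output)
print(is_valid, risk_score)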
Training and evaluation data
The model was trained on a custom dataset combined from multiple open-source ones, using roughly 10% rejections and 90% normal outputs.
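As an illustration of that class balance, here is a hedged sketch using the 🤗 Datasets library; the dataset names are hypothetical placeholders, not the actual sources:

from datasets import interleave_datasets, load_dataset

# Hypothetical placeholder datasets: refusal-style outputs and normal RLHF responses
rejections = load_dataset("example/rejections", split="train")       # label 1
normal = load_dataset("example/normal-responses", split="train")     # label 0

# Mix the two sources at roughly the 10%/90% ratio used for this model
train_ds = interleave_datasets([normal, rejections], probabilities=[0.9, 0.1], seed=42)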
We used the following papers when preparing the datasets:
The following hyperparameters were used during training (a TrainingArguments sketch follows the list):
learning_rate: 2e-05
train_batch_size: 16
eval_batch_size: 8
seed: 42
optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
lr_scheduler_type: linear
lr_scheduler_warmup_steps: 500
num_epochs: 3
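For reference, a hedged sketch of how these settings map onto 🤗 Transformers' TrainingArguments (Adam with these betas and epsilon is the Trainer's default optimizer; the output_dir is a placeholder):

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilroberta-base-rejection-v1",  # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=3,
    # These match the Trainer's Adam defaults listed above
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)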
Training results
| Training Loss | Epoch | Step  | Validation Loss | Accuracy | Recall | Precision | F1     |
|---------------|-------|-------|-----------------|----------|--------|-----------|--------|
| 0.0525        | 1.0   | 3536  | 0.0355          | 0.9912   | 0.9583 | 0.9675    | 0.9629 |
| 0.0219        | 2.0   | 7072  | 0.0312          | 0.9919   | 0.9917 | 0.9434    | 0.9669 |
| 0.0121        | 3.0   | 10608 | 0.0350          | 0.9939   | 0.9905 | 0.9596    | 0.9748 |
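F1 is the harmonic mean of precision and recall; for epoch 3, 2 × 0.9596 × 0.9905 / (0.9596 + 0.9905) ≈ 0.9748, matching the reported value.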
Framework versions
Transformers 4.36.2
Pytorch 2.1.2+cu121
Datasets 2.16.1
Tokenizers 0.15.0
Community
Join our Slack to give us feedback, connect with the maintainers and fellow users, ask questions, get help with package usage or contributions, or engage in discussions about LLM security!
Citation
@misc{distilroberta-base-rejection-v1,
  author = {ProtectAI.com},
  title = {Fine-Tuned DistilRoberta-Base for Rejection in the output Detection},
  year = {2024},
  publisher = {HuggingFace},
  url = {https://huggingface.co/ProtectAI/distilroberta-base-rejection-v1},
}