Yin et al. proposed a method for using pre-trained NLI models as ready-made zero-shot sequence classifiers. The method works by posing the sequence to be classified as the NLI premise and constructing a hypothesis from each candidate label. For example, if we want to evaluate whether a sequence belongs to the class "politics", we could construct the hypothesis "This text is about politics." The probabilities for entailment and contradiction are then converted to label probabilities.
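As a rough illustration (the example sequence here is invented, not from the original card), each candidate label yields one premise/hypothesis pair for the NLI model to score:

# each label is turned into an NLI hypothesis; the model's entailment
# probability for that pair becomes the label's score
sequence = "one day I will see the world"
candidate_labels = ["travel", "cooking", "dancing"]
pairs = [(sequence, f"This example is {label}.") for label in candidate_labels]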
This method is surprisingly effective in many cases, particularly when used with larger pre-trained models like BART and RoBERTa. See this blog post for a more expansive introduction to this and other zero-shot methods, and see the code snippets below for examples of using this model for zero-shot classification both with Hugging Face's built-in pipeline and with native Transformers/PyTorch code.
With the zero-shot classification pipeline
The model can be loaded with the zero-shot-classification pipeline like so:
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")
You can then use this pipeline to classify sequences into any of the class names you specify.
sequence_to_classify = "one day I will see the world"
candidate_labels = ['travel', 'cooking', 'dancing']
classifier(sequence_to_classify, candidate_labels)
#{'labels': ['travel', 'dancing', 'cooking'],
# 'scores': [0.9938651323318481, 0.0032737774308770895, 0.002861034357920289],
# 'sequence': 'one day I will see the world'}
If more than one candidate label can be correct, pass multi_label=True to calculate each class independently:
candidate_labels = ['travel', 'cooking', 'dancing', 'exploration']
classifier(sequence_to_classify, candidate_labels, multi_label=True)
#{'labels': ['travel', 'exploration', 'dancing', 'cooking'],
# 'scores': [0.9945111274719238,
#  0.9383890628814697,
#  0.0057061901316046715,
#  0.0018193122232332826],
# 'sequence': 'one day I will see the world'}
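The pipeline also accepts a hypothesis_template argument (the default is "This example is {}."), so you can phrase the hypothesis to better match your labels:

classifier(sequence_to_classify, candidate_labels,
           hypothesis_template="This text is about {}.")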
With manual PyTorch
# pose sequence as an NLI premise and label as a hypothesis
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

device = 'cuda' if torch.cuda.is_available() else 'cpu'
nli_model = AutoModelForSequenceClassification.from_pretrained('facebook/bart-large-mnli').to(device)
tokenizer = AutoTokenizer.from_pretrained('facebook/bart-large-mnli')
premise = "one day I will see the world"  # the sequence to classify
label = 'travel'                          # the candidate label to test
hypothesis = f'This example is {label}.'

# run through model pre-trained on MNLI
x = tokenizer.encode(premise, hypothesis, return_tensors='pt',
                     truncation='only_first')
logits = nli_model(x.to(device))[0]
# we throw away "neutral" (dim 1) and take the probability of
# "entailment" (2) as the probability of the label being true
entail_contradiction_logits = logits[:,[0,2]]
probs = entail_contradiction_logits.softmax(dim=1)
prob_label_is_true = probs[:,1]
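To reproduce the pipeline's default single-label behavior from this manual setup, one approach (a sketch reusing nli_model, tokenizer, premise, and device from above) is to collect the entailment logit for every candidate label and softmax across labels:

candidate_labels = ['travel', 'cooking', 'dancing']
entail_logits = []
for label in candidate_labels:
    hypothesis = f'This example is {label}.'
    x = tokenizer.encode(premise, hypothesis, return_tensors='pt',
                         truncation='only_first')
    with torch.no_grad():
        logits = nli_model(x.to(device))[0]
    entail_logits.append(logits[0, 2])  # index 2 = entailment for this checkpoint
label_probs = torch.stack(entail_logits).softmax(dim=0)
# label_probs[i] is the probability that candidate_labels[i] is the correct label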