distilbert-NER is a fine-tuned version of DistilBERT, a distilled variant of the BERT model. DistilBERT has fewer parameters than BERT, making it smaller, faster, and more efficient. distilbert-NER is fine-tuned specifically for the task of Named Entity Recognition (NER).
This model identifies the same four entity types as its BERT counterparts: location (LOC), organization (ORG), person (PER), and miscellaneous (MISC). Although it is a more compact model, distilbert-NER delivers robust performance on NER tasks, balancing size, speed, and accuracy.
The model was fine-tuned on the English version of the CoNLL-2003 Named Entity Recognition dataset, which is widely recognized for its comprehensive and diverse range of entity types.
A related fine-tuned bert-base model (110M parameters) is also available, in both cased and uncased versions.
Intended uses & limitations
How to use
This model can be used with the Transformers pipeline for NER, similar to the BERT models.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

# Load the fine-tuned tokenizer and token-classification model from the Hub.
tokenizer = AutoTokenizer.from_pretrained("dslim/distilbert-NER")
model = AutoModelForTokenClassification.from_pretrained("dslim/distilbert-NER")

# Build an NER pipeline that returns one prediction per token.
nlp = pipeline("ner", model=model, tokenizer=tokenizer)

example = "My name is Wolfgang and I live in Berlin"
ner_results = nlp(example)
print(ner_results)
```
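For the sentence above, the pipeline returns one dictionary per predicted token, so "Wolfgang" should come back tagged B-PER and "Berlin" tagged B-LOC. If you want whole entity spans rather than per-token tags, the pipeline's built-in aggregation_strategy argument merges subword pieces and B-/I- tags. A minimal sketch continuing the example above; the printed output is illustrative, and actual scores will vary:

```python
# Optional: merge B-/I- pieces into whole entity spans using the pipeline's
# built-in aggregation; "simple" is one of the standard strategies.
nlp_grouped = pipeline(
    "ner",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",
)
print(nlp_grouped("My name is Wolfgang and I live in Berlin"))
# Illustrative output shape (scores will vary):
# [{'entity_group': 'PER', 'score': 0.99, 'word': 'Wolfgang', 'start': 11, 'end': 19},
#  {'entity_group': 'LOC', 'score': 0.99, 'word': 'Berlin', 'start': 34, 'end': 40}]
```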
Limitations and bias
The performance of distilbert-NER is linked to its training on the CoNLL-2003 dataset. Therefore, it might show limited effectiveness on text data that significantly differs from this training set. Users should be aware of potential biases inherent in the training data and the possibility of entity misclassification in complex sentences.
The training dataset distinguishes between the beginning and the continuation of an entity, so that when there are back-to-back entities of the same type, the model can mark where the second entity begins. As in the dataset, each token is classified as one of the following classes (a short decoding sketch follows the table):
| Abbreviation | Description |
|--------------|-------------|
| O      | Outside of a named entity |
| B-MISC | Beginning of a miscellaneous entity right after another miscellaneous entity |
| I-MISC | Miscellaneous entity |
| B-PER  | Beginning of a person's name right after another person's name |
| I-PER  | Person's name |
| B-ORG  | Beginning of an organization right after another organization |
| I-ORG  | Organization |
| B-LOC  | Beginning of a location right after another location |
| I-LOC  | Location |
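To make the scheme concrete, here is a minimal sketch of how B-/I-/O tags can be grouped into entity spans. The decode_bio helper is hypothetical and written only for illustration; it is not part of the model or the Transformers library, whose pipelines do this for you via aggregation_strategy:

```python
def decode_bio(tokens, tags):
    """Group parallel (token, tag) lists into (entity_type, text) spans."""
    spans, current_tokens, current_type = [], [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-") or (tag.startswith("I-") and tag[2:] != current_type):
            # A B- tag (or an unexpected type change) starts a new entity.
            if current_tokens:
                spans.append((current_type, " ".join(current_tokens)))
            current_tokens, current_type = [token], tag[2:]
        elif tag.startswith("I-"):
            current_tokens.append(token)  # continuation of the current entity
        else:  # "O" closes any open entity
            if current_tokens:
                spans.append((current_type, " ".join(current_tokens)))
            current_tokens, current_type = [], None
    if current_tokens:
        spans.append((current_type, " ".join(current_tokens)))
    return spans

print(decode_bio(["Wolfgang", "lives", "in", "Berlin"],
                 ["B-PER", "O", "O", "B-LOC"]))
# [('PER', 'Wolfgang'), ('LOC', 'Berlin')]
```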
CoNLL-2003 English Dataset Statistics
This dataset was derived from the Reuters corpus, which consists of Reuters news stories. You can read more about how this dataset was created in the CoNLL-2003 paper.
# of training examples per entity type

| Dataset | LOC  | MISC | ORG  | PER  |
|---------|------|------|------|------|
| Train   | 7140 | 3438 | 6321 | 6600 |
| Dev     | 1837 | 922  | 1341 | 1842 |
| Test    | 1668 | 702  | 1661 | 1617 |
# of articles/sentences/tokens per dataset

| Dataset | Articles | Sentences | Tokens  |
|---------|----------|-----------|---------|
| Train   | 946      | 14,987    | 203,621 |
| Dev     | 216      | 3,466     | 51,362  |
| Test    | 231      | 3,684     | 46,435  |
Training procedure
This model was trained on a single NVIDIA V100 GPU with the recommended hyperparameters from the original BERT paper, which trained and evaluated the model on the CoNLL-2003 NER task.
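The card does not list the exact values, but for reference the BERT paper recommends fine-tuning with a batch size of 16 or 32, a learning rate between 2e-5 and 5e-5, and 2 to 4 epochs. A minimal, hypothetical sketch of such a configuration using the Transformers TrainingArguments (illustrative values, not the actual settings used for this model):

```python
from transformers import TrainingArguments

# Hypothetical fine-tuning configuration: the values below fall in the
# ranges recommended by the original BERT paper, but the exact settings
# used to train distilbert-NER are not documented here.
training_args = TrainingArguments(
    output_dir="distilbert-NER",
    learning_rate=3e-5,              # BERT paper recommends 2e-5 to 5e-5
    per_device_train_batch_size=32,  # BERT paper recommends 16 or 32
    num_train_epochs=3,              # BERT paper recommends 2 to 4
    weight_decay=0.01,
)
```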
Eval results
| Metric    | Score  |
|-----------|--------|
| Loss      | 0.0710 |
| Precision | 0.9202 |
| Recall    | 0.9232 |
| F1        | 0.9217 |
| Accuracy  | 0.9810 |
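As a quick sanity check, F1 is the harmonic mean of precision and recall: 2 × 0.9202 × 0.9232 / (0.9202 + 0.9232) ≈ 0.9217, which matches the reported score.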
The training and validation losses decreased over the epochs, signaling effective learning. The precision, recall, and F1 scores are competitive, demonstrating the model's robustness on NER tasks.
BibTeX entry and citation info
For DistilBERT:
@article{sanh2019distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Sanh, Victor and Debut, Lysandre and Chaumond, Julien and Wolf, Thomas},
journal={arXiv preprint arXiv:1910.01108},
year={2019}
}
For the underlying BERT model:
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}