from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("flair/ner-dutch-large")
# make example sentence
sentence = Sentence("George Washington ging naar Washington")
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
This yields the following output:
Span [1,2]: "George Washington" [− Labels: PER (1.0)]
Span [5]: "Washington" [− Labels: LOC (1.0)]
So, the entities "George Washington" (labeled as a person) and "Washington" (labeled as a location) are found in the sentence "George Washington ging naar Washington".
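The span indices in the output above are 1-based, inclusive token positions: span [1,2] covers the first two tokens and span [5] the fifth. A minimal pure-Python sketch of this indexing convention (whitespace tokenization is an assumption made here for illustration; Flair uses its own tokenizer, and the `span_text` helper is hypothetical, not part of the Flair API):

```python
sentence_text = "George Washington ging naar Washington"
tokens = sentence_text.split()  # assumed whitespace tokenization, for illustration only

def span_text(tokens, start, end):
    # Flair prints 1-based, inclusive token positions, e.g. Span [1,2]
    return " ".join(tokens[start - 1:end])

print(span_text(tokens, 1, 2))  # George Washington
print(span_text(tokens, 5, 5))  # Washington
```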
Training: Script to train this model
The following Flair script was used to train this model:
import torch
# 1. get the corpus
from flair.datasets import CONLL_03_DUTCH
corpus = CONLL_03_DUTCH()
# 2. what tag do we want to predict?
tag_type = 'ner'
# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)
# 4. initialize fine-tuneable transformer embeddings WITH document context
from flair.embeddings import TransformerWordEmbeddings
embeddings = TransformerWordEmbeddings(
model='xlm-roberta-large',
layers="-1",
subtoken_pooling="first",
fine_tune=True,
use_context=True,
)
# 5. initialize bare-bones sequence tagger (no CRF, no RNN, no reprojection)
from flair.models import SequenceTagger
tagger = SequenceTagger(
hidden_size=256,
embeddings=embeddings,
tag_dictionary=tag_dictionary,
tag_type='ner',
use_crf=False,
use_rnn=False,
reproject_embeddings=False,
)
# 6. initialize trainer with AdamW optimizer
from flair.trainers import ModelTrainer
trainer = ModelTrainer(tagger, corpus, optimizer=torch.optim.AdamW)
# 7. run training with XLM parameters (20 epochs, small LR)
from torch.optim.lr_scheduler import OneCycleLR
trainer.train('resources/taggers/ner-dutch-large',
learning_rate=5.0e-6,
mini_batch_size=4,
mini_batch_chunk_size=1,
max_epochs=20,
scheduler=OneCycleLR,
embeddings_storage_mode='none',
weight_decay=0.,
)
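In the call above, `mini_batch_size=4` together with `mini_batch_chunk_size=1` means each mini-batch of four sentences is processed one sentence at a time to fit in GPU memory, with gradients accumulated across the chunks before the optimizer step, so the effective batch size stays at four. A sketch of that chunking idea (the `split_into_chunks` helper is hypothetical, not Flair's API):

```python
def split_into_chunks(mini_batch, chunk_size):
    # Split a mini-batch into chunks of at most chunk_size items;
    # gradients would be accumulated over the chunks, so the
    # effective batch size stays equal to len(mini_batch).
    return [mini_batch[i:i + chunk_size]
            for i in range(0, len(mini_batch), chunk_size)]

mini_batch = ["zin 1", "zin 2", "zin 3", "zin 4"]
print(split_into_chunks(mini_batch, 1))
# → [['zin 1'], ['zin 2'], ['zin 3'], ['zin 4']]
```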
Cite
Please cite the following paper when using this model.
@misc{schweter2020flert,
title={FLERT: Document-Level Features for Named Entity Recognition},
author={Stefan Schweter and Alan Akbik},
year={2020},
eprint={2011.06993},
archivePrefix={arXiv},
primaryClass={cs.CL}
}