google-bert / bert-large-cased-whole-word-masking-finetuned-squad

huggingface.co
Total runs: 152.4K
24-hour runs: 2.3K
7-day runs: -1.0K
30-day runs: 27.7K
Model last updated: February 19, 2024
question-answering

Introduction of bert-large-cased-whole-word-masking-finetuned-squad

Model Details of bert-large-cased-whole-word-masking-finetuned-squad

BERT large model (cased) whole word masking finetuned on SQuAD

Pretrained model on the English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is cased: it makes a difference between english and English.

Unlike other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same.

The training is otherwise identical: each masked WordPiece token is still predicted independently.

After pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. See below for more information regarding this fine-tuning.

Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by the Hugging Face team.

Model description

BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:

  • Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
  • Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict whether the two sentences were following each other or not.

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the BERT model as inputs.
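
This feature-extraction idea can be sketched with the transformers library; the snippet below is an illustration only (it uses the base whole-word-masking checkpoint referenced in the fine-tuning command further down, not this SQuAD fine-tune, and the input sentence is made up) and takes the hidden state at the [CLS] position as a sentence-level feature vector:

import torch
from transformers import AutoTokenizer, AutoModel

# Base (not SQuAD fine-tuned) whole-word-masking checkpoint.
name = "google-bert/bert-large-cased-whole-word-masking"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

inputs = tokenizer("An example sentence to featurize.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# The [CLS] position of the last hidden layer is a common sentence-level feature.
cls_features = outputs.last_hidden_state[:, 0]  # shape: (1, 1024)
print(cls_features.shape)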

This model has the following configuration:

  • 24-layer
  • 1024 hidden dimension
  • 16 attention heads
  • 336M parameters.
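
These values can be checked directly against the published configuration, for instance with this quick sketch (assuming the transformers library is installed):

from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    "google-bert/bert-large-cased-whole-word-masking-finetuned-squad"
)
# Expect 24 layers, a hidden size of 1024 and 16 attention heads.
print(config.num_hidden_layers, config.hidden_size, config.num_attention_heads)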
Intended uses & limitations

This model should be used as a question-answering model. You may use it in a question-answering pipeline, or use it to output raw results given a query and a context. You may see other use cases in the task summary of the transformers documentation.
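
For instance, a minimal pipeline sketch (the question and context strings are illustrative only):

from transformers import pipeline

# Load the SQuAD fine-tuned checkpoint from the Hub.
qa = pipeline(
    "question-answering",
    model="google-bert/bert-large-cased-whole-word-masking-finetuned-squad",
)

result = qa(
    question="Which technique was used during pre-training?",
    context="The model was pre-trained with Whole Word Masking and then fine-tuned on SQuAD.",
)
print(result)  # e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': 'Whole Word Masking'}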

Training data

The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038 unpublished books, and English Wikipedia (excluding lists, tables and headers).

Training procedure
Preprocessing

The texts are tokenized using WordPiece and a vocabulary size of 30,000. Since this model is cased, the inputs are not lowercased. The inputs of the model are then of the form:

[CLS] Sentence A [SEP] Sentence B [SEP]

With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in the other cases, sentence B is another random sentence from the corpus. Note that what is considered a sentence here is a consecutive span of text, usually longer than a single sentence. The only constraint is that the combined length of the two "sentences" is less than 512 tokens.
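
This layout is easy to inspect by tokenizing a sentence pair (a small sketch; the example sentences are made up):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "google-bert/bert-large-cased-whole-word-masking-finetuned-squad"
)

# Passing two texts yields the [CLS] A [SEP] B [SEP] layout described above.
encoding = tokenizer("The first sentence.", "The sentence that follows it.")
print(tokenizer.convert_ids_to_tokens(encoding["input_ids"]))
# e.g. ['[CLS]', 'The', 'first', 'sentence', '.', '[SEP]', 'The', ..., '[SEP]']
# (the exact WordPiece split depends on the vocabulary)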

The details of the masking procedure for each sentence are the following (a rough code sketch follows the list):

  • 15% of the tokens are masked.
  • In 80% of the cases, the masked tokens are replaced by [MASK] .
  • In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
  • In the 10% remaining cases, the masked tokens are left as is.
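
Purely as an illustration of the 80/10/10 rule above combined with whole-word masking, here is a minimal sketch (this is not the original data-creation code: whole words are grouped via WordPiece's "##" continuation prefix, and each word is selected independently with roughly 15% probability, whereas the original implementation selects words until about 15% of the tokens are covered):

import random

def whole_word_mask(tokens, vocab, select_prob=0.15):
    """Sketch: whole-word masking with the 80/10/10 replacement rule."""
    # Group token indices into whole words using the "##" continuation prefix.
    words = []
    for i, tok in enumerate(tokens):
        if tok.startswith("##") and words:
            words[-1].append(i)
        else:
            words.append([i])

    output = list(tokens)
    for word in words:
        if random.random() >= select_prob:
            continue  # this word is not selected for masking
        # Every piece of a selected word is masked; the replacement is drawn per piece.
        for i in word:
            r = random.random()
            if r < 0.8:
                output[i] = "[MASK]"              # 80%: replace with [MASK]
            elif r < 0.9:
                output[i] = random.choice(vocab)  # 10%: replace with a random token
            # remaining 10%: keep the original token
    return output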
Pretraining

The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer used is Adam with a learning rate of 1e-4, β₁ = 0.9 and β₂ = 0.999, a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.
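
A rough PyTorch equivalent of that optimizer and schedule (a sketch only; the original pretraining ran in TensorFlow, the model here is just a stand-in, and the step counts mirror the paragraph above):

import torch
from transformers import BertConfig, BertForPreTraining, get_linear_schedule_with_warmup

# Stand-in model; the real run used BERT-large with whole-word-masked data.
model = BertForPreTraining(BertConfig())

# Adam with decoupled weight decay, lr 1e-4, β₁ = 0.9, β₂ = 0.999, weight decay 0.01.
optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01
)

# 10,000 warmup steps, then linear decay over the 1,000,000 total steps.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=1_000_000
)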

Fine-tuning

After pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. In order to reproduce the training, you may use the following command:

python -m torch.distributed.launch --nproc_per_node=8 ./examples/question-answering/run_qa.py \
    --model_name_or_path bert-large-cased-whole-word-masking \
    --dataset_name squad \
    --do_train \
    --do_eval \
    --learning_rate 3e-5 \
    --num_train_epochs 2 \
    --max_seq_length 384 \
    --doc_stride 128 \
    --output_dir ./examples/models/wwm_cased_finetuned_squad/ \
    --per_device_eval_batch_size=3   \
    --per_device_train_batch_size=3
BibTeX entry and citation info
@article{DBLP:journals/corr/abs-1810-04805,
  author    = {Jacob Devlin and
               Ming{-}Wei Chang and
               Kenton Lee and
               Kristina Toutanova},
  title     = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
               Understanding},
  journal   = {CoRR},
  volume    = {abs/1810.04805},
  year      = {2018},
  url       = {http://arxiv.org/abs/1810.04805},
  archivePrefix = {arXiv},
  eprint    = {1810.04805},
  timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}

Runs of google-bert bert-large-cased-whole-word-masking-finetuned-squad on huggingface.co

  • Total runs: 152.4K
  • 24-hour runs: 2.3K
  • 3-day runs: 3.1K
  • 7-day runs: -1.0K
  • 30-day runs: 27.7K

More Information About bert-large-cased-whole-word-masking-finetuned-squad huggingface.co Model

The bert-large-cased-whole-word-masking-finetuned-squad model is released under the Apache 2.0 license. For details, visit:

https://choosealicense.com/licenses/apache-2.0

bert-large-cased-whole-word-masking-finetuned-squad huggingface.co

bert-large-cased-whole-word-masking-finetuned-squad is an AI model hosted on huggingface.co, where the google-bert bert-large-cased-whole-word-masking-finetuned-squad model can be used instantly. huggingface.co supports a free trial of the model and also provides paid usage, and the model can be called through an API from Node.js, Python, or plain HTTP.

bert-large-cased-whole-word-masking-finetuned-squad huggingface.co Url

https://huggingface.co/google-bert/bert-large-cased-whole-word-masking-finetuned-squad

google-bert bert-large-cased-whole-word-masking-finetuned-squad online free

huggingface.co is an online trial and API platform that integrates bert-large-cased-whole-word-masking-finetuned-squad, including its API services, and provides a free online trial of the model; you can try bert-large-cased-whole-word-masking-finetuned-squad online for free by clicking the link below.

google-bert bert-large-cased-whole-word-masking-finetuned-squad online free url in huggingface.co:

https://huggingface.co/google-bert/bert-large-cased-whole-word-masking-finetuned-squad

bert-large-cased-whole-word-masking-finetuned-squad install

bert-large-cased-whole-word-masking-finetuned-squad is an open-source model whose code is available on GitHub, and any user can install it for free. At the same time, huggingface.co hosts the model so that users can debug and try bert-large-cased-whole-word-masking-finetuned-squad directly for evaluation. It also supports free API access.

bert-large-cased-whole-word-masking-finetuned-squad install url in huggingface.co:

https://huggingface.co/google-bert/bert-large-cased-whole-word-masking-finetuned-squad


Provider of bert-large-cased-whole-word-masking-finetuned-squad huggingface.co

google-bert
ORGANIZATIONS

Other API from google-bert