NorahAlshahrani / Adv_BERT_Hard

huggingface.co
Total runs: 186
24-hour runs: 0
7-day runs: 178
30-day runs: 176
Model Last Updated: April 02 2024
text-classification

Introduction of Adv_BERT_Hard

Model Details of Adv_BERT_Hard

Adv_BERT_Hard

This model is an adversarially fine-tuned version of aubmindlab/bert-base-arabertv2 on the Hotel Arabic Reviews Dataset (HARD), augmented with our generated adversarial examples. It achieves the following results on the evaluation set:

  • Loss: 0.4285
  • Accuracy: 0.8267
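
A minimal usage sketch, assuming the standard transformers text-classification pipeline; the example review below is illustrative and not taken from the model card:

    from transformers import pipeline

    # Load the adversarially fine-tuned checkpoint from the Hugging Face Hub.
    classifier = pipeline("text-classification", model="NorahAlshahrani/Adv_BERT_Hard")

    # Classify an Arabic hotel review (illustrative example sentence).
    review = "الفندق نظيف والخدمة ممتازة"  # "The hotel is clean and the service is excellent."
    print(classifier(review))  # e.g., [{'label': ..., 'score': ...}]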
BibTeX Citations:
@inproceedings{alshahrani-etal-2024-arabic,
    title = "{{A}rabic Synonym {BERT}-based Adversarial Examples for Text Classification}",
    author = "Alshahrani, Norah  and
      Alshahrani, Saied  and
      Wali, Esma  and
      Matthews, Jeanna",
    editor = "Falk, Neele  and
      Papi, Sara  and
      Zhang, Mike",
    booktitle = "Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: Student Research Workshop",
    month = mar,
    year = "2024",
    address = "St. Julian{'}s, Malta",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.eacl-srw.10",
    pages = "137--147",
    abstract = "Text classification systems have been proven vulnerable to adversarial text examples, modified versions of the original text examples that are often unnoticed by human eyes, yet can force text classification models to alter their classification. Often, research works quantifying the impact of adversarial text attacks have been applied only to models trained in English. In this paper, we introduce the first word-level study of adversarial attacks in Arabic. Specifically, we use a synonym (word-level) attack using a Masked Language Modeling (MLM) task with a BERT model in a black-box setting to assess the robustness of the state-of-the-art text classification models to adversarial attacks in Arabic. To evaluate the grammatical and semantic similarities of the newly produced adversarial examples using our synonym BERT-based attack, we invite four human evaluators to assess and compare the produced adversarial examples with their original examples. We also study the transferability of these newly produced Arabic adversarial examples to various models and investigate the effectiveness of defense mechanisms against these adversarial examples on the BERT models. We find that fine-tuned BERT models were more susceptible to our synonym attacks than the other Deep Neural Networks (DNN) models like WordCNN and WordLSTM we trained. We also find that fine-tuned BERT models were more susceptible to transferred attacks. We, lastly, find that fine-tuned BERT models successfully regain at least 2{\%} in accuracy after applying adversarial training as an initial defense mechanism.",
}
@misc{alshahrani2024arabic,
      title={{Arabic Synonym BERT-based Adversarial Examples for Text Classification}}, 
      author={Norah Alshahrani and Saied Alshahrani and Esma Wali and Jeanna Matthews},
      year={2024},
      eprint={2402.03477},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
Training procedure

We trained this model using the PaperSpace GPU-Cloud service, on a machine with 8 CPUs, 45 GB of RAM, and an A6000 GPU with 48 GB of GPU memory.

Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 3
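
A hedged sketch of how these hyperparameters might map onto the Hugging Face Trainer API. The label count and the dataset preparation (tokenized HARD reviews plus the generated adversarial examples) are assumptions supplied by the caller, since the card does not specify them:

    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    def fine_tune(train_dataset, eval_dataset, num_labels=2):
        """Fine-tune AraBERTv2 with the hyperparameters listed above.
        The tokenized datasets (HARD reviews plus generated adversarial
        examples) and the label count are assumptions passed in by the caller."""
        tokenizer = AutoTokenizer.from_pretrained("aubmindlab/bert-base-arabertv2")
        model = AutoModelForSequenceClassification.from_pretrained(
            "aubmindlab/bert-base-arabertv2", num_labels=num_labels
        )
        args = TrainingArguments(
            output_dir="Adv_BERT_Hard",
            learning_rate=2e-5,
            per_device_train_batch_size=16,
            per_device_eval_batch_size=8,
            num_train_epochs=3,
            seed=42,
            lr_scheduler_type="linear",
            evaluation_strategy="epoch",
            # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer setting here.
        )
        trainer = Trainer(
            model=model,
            args=args,
            train_dataset=train_dataset,
            eval_dataset=eval_dataset,
            tokenizer=tokenizer,
        )
        trainer.train()
        return trainer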
Training results
Training Loss | Epoch | Step  | Validation Loss | Accuracy
0.4562        | 1.0   |  5978 | 0.4058          | 0.8257
0.3904        | 2.0   | 11956 | 0.4075          | 0.8289
0.3508        | 3.0   | 17934 | 0.4285          | 0.8267
Framework versions
  • Transformers 4.32.1
  • Pytorch 1.12.1+cu116
  • Datasets 2.4.0
  • Tokenizers 0.12.1
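
A small sketch for checking whether a local environment matches the versions listed above (purely illustrative):

    import datasets
    import tokenizers
    import torch
    import transformers

    # Compare installed versions against those reported in this card.
    reported = {
        "Transformers": (transformers.__version__, "4.32.1"),
        "Pytorch": (torch.__version__, "1.12.1+cu116"),
        "Datasets": (datasets.__version__, "2.4.0"),
        "Tokenizers": (tokenizers.__version__, "0.12.1"),
    }
    for name, (installed, expected) in reported.items():
        print(f"{name}: installed {installed}, card reports {expected}")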

Runs of NorahAlshahrani Adv_BERT_Hard on huggingface.co

Total runs: 186
24-hour runs: 0
3-day runs: 18
7-day runs: 178
30-day runs: 176

More Information About the Adv_BERT_Hard huggingface.co Model

For more on the Adv_BERT_Hard license, visit:

https://choosealicense.com/licenses/mit

Adv_BERT_Hard huggingface.co

Adv_BERT_Hard is an AI model hosted on huggingface.co, where it can be used instantly from the NorahAlshahrani/Adv_BERT_Hard model page. huggingface.co supports a free trial of the Adv_BERT_Hard model and also provides paid use. The model can be called through an API from Node.js, Python, or plain HTTP, as sketched below.
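
A hedged Python sketch of calling the model through the Hugging Face Inference API mentioned above; the endpoint follows the standard Hub pattern, the access token is assumed to be set in the HF_TOKEN environment variable, and the example text is illustrative:

    import os
    import requests

    API_URL = "https://api-inference.huggingface.co/models/NorahAlshahrani/Adv_BERT_Hard"
    headers = {"Authorization": f"Bearer {os.environ['HF_TOKEN']}"}  # assumed environment variable

    # Send an Arabic review to the hosted text-classification endpoint.
    payload = {"inputs": "الغرفة واسعة والموقع ممتاز"}  # "The room is spacious and the location is excellent."
    response = requests.post(API_URL, headers=headers, json=payload)
    print(response.json())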

NorahAlshahrani Adv_BERT_Hard online free

The Adv_BERT_Hard page on huggingface.co is an online trial and API platform that integrates the model's capabilities, including API services, and provides a free online trial of Adv_BERT_Hard. You can try Adv_BERT_Hard online for free by clicking the link below.

NorahAlshahrani Adv_BERT_Hard online free url in huggingface.co:

https://huggingface.co/NorahAlshahrani/Adv_BERT_Hard

Adv_BERT_Hard install

Adv_BERT_Hard is an open-source model that any user can download and install for free from huggingface.co. huggingface.co also hosts a working instance of Adv_BERT_Hard, so users can try and debug the installed model directly, and the API can likewise be used free of charge; a download sketch follows.
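
A minimal sketch of downloading the model files locally with huggingface_hub (the target directory is an illustrative choice):

    from huggingface_hub import snapshot_download

    # Download the config, weights, and tokenizer files to a local directory.
    local_path = snapshot_download(
        repo_id="NorahAlshahrani/Adv_BERT_Hard",
        local_dir="./Adv_BERT_Hard",  # illustrative target path
    )
    print("Model files downloaded to:", local_path)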

Adv_BERT_Hard install url in huggingface.co:

https://huggingface.co/NorahAlshahrani/Adv_BERT_Hard

Url of Adv_BERT_Hard: https://huggingface.co/NorahAlshahrani/Adv_BERT_Hard

Provider of Adv_BERT_Hard on huggingface.co

NorahAlshahrani
