ai4bharat / IndicBARTSS

Model last updated: March 15, 2022
text2text-generation

Model Details

IndicBARTSS is a multilingual, sequence-to-sequence pre-trained model focusing on Indic languages and English. It currently supports 11 Indian languages and is based on the mBART architecture. You can use the IndicBARTSS model to build natural language generation applications for Indian languages by fine-tuning it with supervised training data for tasks like machine translation, summarization, and question generation. Some salient features of IndicBARTSS are:

  • Supported languages: Assamese, Bengali, Gujarati, Hindi, Marathi, Odia, Punjabi, Kannada, Malayalam, Tamil, Telugu, and English. Not all of these languages are supported by mBART50 and mT5.
  • The model is much smaller than the mBART and mT5(-base) models, making it less computationally expensive to fine-tune and decode.
  • Trained on large Indic-language corpora (452 million sentences and 9 billion tokens), which also include Indian English content.
  • Unlike ai4bharat/IndicBART, each language is written in its own script, so you do not need to perform any script mapping to or from Devanagari.

You can read more about IndicBARTSS in this paper: https://arxiv.org/abs/2109.02903

For detailed documentation, see https://github.com/AI4Bharat/indic-bart/ and https://indicnlp.ai4bharat.org/indic-bart/

Pre-training corpus

We used the IndicCorp data spanning 12 languages with 452 million sentences (9 billion tokens). The model was trained using the text-infilling objective used in mBART.
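To make the objective concrete, here is a minimal sketch of mBART-style text infilling: random token spans are replaced with a single mask token, and the model learns to reconstruct the original sentence. The hyperparameters below (Poisson-distributed span lengths, roughly 35% of tokens masked) are assumptions taken from the mBART paper; this is an illustration, not the actual pre-training code.

import numpy as np

# Illustrative sketch of text infilling (assumed hyperparameters from mBART).
def text_infill(tokens, mask_token="[MASK]", mask_ratio=0.35, poisson_lambda=3.5):
    rng = np.random.default_rng(0)
    out, i, masked = [], 0, 0
    budget = int(len(tokens) * mask_ratio)  # rough cap on how many tokens to mask
    while i < len(tokens):
        if masked < budget and rng.random() < mask_ratio:
            span = max(1, int(rng.poisson(poisson_lambda)))  # sampled span length
            out.append(mask_token)  # the whole span collapses into one mask token
            i += span
            masked += span
        else:
            out.append(tokens[i])
            i += 1
    return out

corrupted = text_infill("I am a boy".split())
# Training pair, in the format used throughout this card:
#   encoder input:  " ".join(corrupted) + " </s> <2en>"
#   decoder target: "<2en> I am a boy </s>"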

Usage:

from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM
from transformers import AlbertTokenizer, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ai4bharat/IndicBARTSS", do_lower_case=False, use_fast=False, keep_accents=True)

# Or use tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/IndicBARTSS", do_lower_case=False, use_fast=False, keep_accents=True)

model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/IndicBARTSS")

# Or use model = MBartForConditionalGeneration.from_pretrained("ai4bharat/IndicBARTSS")

# Some initial mapping
bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>")
eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>")
pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>")
# To get lang_id use any of ['<2as>', '<2bn>', '<2en>', '<2gu>', '<2hi>', '<2kn>', '<2ml>', '<2mr>', '<2or>', '<2pa>', '<2ta>', '<2te>']
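# For example (illustrative), the Hindi tag id:
hi_tag_id = tokenizer._convert_token_to_id_with_added_voc("<2hi>")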

# First tokenize the input and outputs. The format below is how IndicBARTSS was trained, so the input should be "Sentence </s> <2xx>" where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>".
inp = tokenizer("I am a boy </s> <2en>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids # tensor([[  466,  1981,    80, 25573, 64001, 64004]])

out = tokenizer("<2hi> मैं  एक लड़का हूँ </s>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids # tensor([[64006,   942,    43, 32720,  8384, 64001]])

model_outputs=model(input_ids=inp, decoder_input_ids=out[:,0:-1], labels=out[:,1:])

# For loss
model_outputs.loss ## This is not label smoothed.

# For logits
model_outputs.logits

# For generation. Pardon the messiness. Note the decoder_start_token_id.

model.eval() # Set dropouts to zero

model_output=model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2en>"))


# Decode to get output strings

decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)

print(decoded_output) # I am a boy

# What if we mask?

inp = tokenizer("I am [MASK] </s> <2en>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids

model_output=model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2en>"))

decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)

print(decoded_output) # I am happy

inp = tokenizer("मैं [MASK] हूँ </s> <2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids

model_output=model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2hi>")) # Use the target-language tag (<2hi>) as the decoder start token.

decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)

print(decoded_output) # मैं जानता हूँ

inp = tokenizer("मला [MASK] पाहिजे </s> <2mr>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids

model_output=model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2mr>")) # Use the target-language tag (<2mr>) as the decoder start token.

decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)

print(decoded_output) # मला ओळखलं पाहिजे
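The examples above generate from one sentence at a time. Batched generation works the same way; the sketch below is an illustration (it reuses the tokenizer, model, and special-token ids from above) that additionally passes the attention mask so padded positions are ignored.

# Batched generation (illustrative sketch).
sentences = ["I am a boy </s> <2en>", "I am [MASK] </s> <2en>"]
batch = tokenizer(sentences, add_special_tokens=False, return_tensors="pt", padding=True)
model_output = model.generate(batch.input_ids, attention_mask=batch.attention_mask, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2en>"))
for seq in model_output:
    print(tokenizer.decode(seq, skip_special_tokens=True, clean_up_tokenization_spaces=False))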

Notes:

  1. The code is compatible with the latest version of transformers, but it was developed with version 4.3.2, so consider using 4.3.2 if possible.
  2. While I have only shown how to obtain the loss and logits and how to generate outputs, you can do pretty much everything the MBartForConditionalGeneration class supports, as described at https://huggingface.co/docs/transformers/model_doc/mbart#transformers.MBartForConditionalGeneration
  3. Note that the tokenizer I have used is based on sentencepiece and not BPE. Therefore, I used the AlbertTokenizer class and not the MBartTokenizer class. A quick check is shown after these notes.
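As a quick, illustrative sanity check of the tokenizer (the exact pieces printed are an assumption and may differ):

print(type(tokenizer).__name__)          # AlbertTokenizer
print(tokenizer.tokenize("I am a boy"))  # sentencepiece pieces, e.g. ['▁I', '▁am', ...]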

Fine-tuning on a downstream task

  1. If you wish to fine-tune this model, you can do so using the YANMTT toolkit (https://github.com/prajdabre/yanmtt), following the instructions there. A minimal plain-PyTorch sketch follows this list.
  2. (Untested) Alternatively, you may use the official Hugging Face scripts for translation and summarization.
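For reference, below is a minimal plain-PyTorch fine-tuning sketch. It is an assumption-laden illustration (toy data, no batching, no pad masking in the labels), not the official recipe; prefer the YANMTT toolkit or the official scripts above for real training. It reuses the format from the Usage section: "Sentence </s> <2xx>" as the encoder input and "<2yy> Sentence </s>" as the decoder target.

import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ai4bharat/IndicBARTSS", do_lower_case=False, use_fast=False, keep_accents=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/IndicBARTSS")

# Toy English->Hindi pair; substitute your own parallel data.
pairs = [("I am a boy </s> <2en>", "<2hi> मैं एक लड़का हूँ </s>")]

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
model.train()
for epoch in range(3):
    for src, tgt in pairs:
        inp = tokenizer(src, add_special_tokens=False, return_tensors="pt").input_ids
        out = tokenizer(tgt, add_special_tokens=False, return_tensors="pt").input_ids
        # Teacher forcing: the decoder input is the target shifted right by one.
        # For padded batches, pad positions in the labels should be set to -100
        # so they are ignored by the loss.
        loss = model(input_ids=inp, decoder_input_ids=out[:, :-1], labels=out[:, 1:]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()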

Contributors

  • Raj Dabre
  • Himani Shrotriya
  • Anoop Kunchukuttan
  • Ratish Puduppully
  • Mitesh M. Khapra
  • Pratyush Kumar

Paper

If you use IndicBARTSS, please cite the following paper:

@misc{dabre2021indicbart,
      title={IndicBART: A Pre-trained Model for Natural Language Generation of Indic Languages}, 
      author={Raj Dabre and Himani Shrotriya and Anoop Kunchukuttan and Ratish Puduppully and Mitesh M. Khapra and Pratyush Kumar},
      year={2021},
      eprint={2109.02903},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
    }    

License

The model is available under the MIT License.
