ai4bharat / MultiIndicWikiBioSS

huggingface.co · text2text-generation
Model last updated: March 29, 2022

Model Details of MultiIndicWikiBioSS

MultiIndicWikiBioSS is a multilingual, sequence-to-sequence pre-trained model: an IndicBARTSS checkpoint fine-tuned on the nine languages of the IndicWikiBio dataset. For fine-tuning details, see the paper. You can use MultiIndicWikiBioSS to build biography-generation applications for Indian languages by fine-tuning the model with supervised training data. Some salient features of MultiIndicWikiBioSS are:

  • Supported languages: Assamese, Bengali, Hindi, Oriya, Punjabi, Kannada, Malayalam, Tamil, and Telugu. Not all of these languages are supported by mBART50 and mT5.
  • The model is much smaller than the mBART and mT5(-base) models, so it is less computationally expensive for fine-tuning and decoding.
  • Fine-tuned on Indic-language corpora (34,653 examples).
  • Unlike ai4bharat/MultiIndicWikiBioUnified, each language is written in its own script, so you do not need to perform any script mapping to/from Devanagari.
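The nine supported languages correspond to the target-language tags used in the usage snippet below; keeping them in a small lookup table can be convenient (the dict itself is our own helper, not part of the model's API):

```python
# ISO 639-1 code -> target-language tag in the MultiIndicWikiBioSS vocabulary.
# The dict is a convenience defined here, not part of the model's API.
LANG_TAGS = {
    "as": "<2as>",  # Assamese
    "bn": "<2bn>",  # Bengali
    "hi": "<2hi>",  # Hindi
    "kn": "<2kn>",  # Kannada
    "ml": "<2ml>",  # Malayalam
    "or": "<2or>",  # Oriya
    "pa": "<2pa>",  # Punjabi
    "ta": "<2ta>",  # Tamil
    "te": "<2te>",  # Telugu
}
```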

You can read more about MultiIndicWikiBioSS in this paper.

Using this model in transformers
from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM
from transformers import AlbertTokenizer, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ai4bharat/MultiIndicWikiBioSS", do_lower_case=False, use_fast=False, keep_accents=True)
# Or use tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/MultiIndicWikiBioSS", do_lower_case=False, use_fast=False, keep_accents=True)

model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/MultiIndicWikiBioSS")
# Or use model = MBartForConditionalGeneration.from_pretrained("ai4bharat/MultiIndicWikiBioSS")

# Some initial mapping
bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>")
eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>")
pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>")
# To get lang_id use any of ['<2as>', '<2bn>', '<2hi>', '<2kn>', '<2ml>', '<2or>', '<2pa>', '<2ta>', '<2te>']

# First tokenize the inputs and outputs. The format below is how IndicBART was trained, so the input should be "Sentence </s> <2xx>" where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>".
inp = tokenizer("<TAG> name </TAG> भीखा लाल <TAG> office </TAG> विधायक - 318 - हसनगंज विधान सभा निर्वाचन क्षेत्र , उत्तर प्रदेश <TAG> term </TAG> 1957 से 1962 <TAG> nationality </TAG> भारतीय</s><2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids 

out = tokenizer("<2hi> भीखा लाल ,भारत के उत्तर प्रदेश की दूसरी विधानसभा सभा में विधायक रहे। </s>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids 

model_outputs = model(input_ids=inp, decoder_input_ids=out[:, 0:-1], labels=out[:, 1:])

# For loss
model_outputs.loss  # Note: this loss is not label-smoothed.

# For logits
model_outputs.logits

# For generation. Note the decoder_start_token_id.
model.eval()  # Disable dropout

model_output = model.generate(inp, use_cache=True, no_repeat_ngram_size=3, encoder_no_repeat_ngram_size=3, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2hi>"))

# Decode to get the output string
decoded_output = tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output) # __भीखा लाल ,भारत के उत्तर प्रदेश की दूसरी विधानसभा सभा में विधायक रहे।
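The `<TAG> field </TAG> value` input string in the snippet above can be assembled from a field/value mapping. A minimal helper that mirrors the serialization shown above (the function name and signature are ours, defined here for illustration):

```python
def serialize_infobox(fields: dict, lang_tag: str) -> str:
    """Serialize an ordered {field: value} mapping into the
    '<TAG> field </TAG> value ... </s><2xx>' input format used above.
    (Helper defined for illustration; not part of the model's API.)"""
    body = " ".join(f"<TAG> {field} </TAG> {value}" for field, value in fields.items())
    return f"{body}</s>{lang_tag}"

# Example: a two-field infobox serialized for Hindi generation.
inp_text = serialize_infobox({"name": "भीखा लाल", "term": "1957 से 1962"}, "<2hi>")
```
The resulting string can be passed to the tokenizer exactly as in the example above.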
Benchmarks

Scores on the IndicWikiBio test sets are as follows:

Language  ROUGE-L
as        56.50
bn        56.58
hi        67.34
kn        39.37
ml        38.42
or        70.71
pa        52.78
ta        51.11
te        51.72
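For context, ROUGE-L scores the longest-common-subsequence (LCS) overlap between a generated biography and the reference. A minimal token-level F1 sketch (for illustration only; the reported numbers come from the standard ROUGE tooling used in the paper):

```python
def rouge_l_f1(candidate: str, reference: str) -> float:
    """Token-level ROUGE-L F1 via longest common subsequence (LCS)."""
    c, r = candidate.split(), reference.split()
    # Dynamic-programming table for LCS length.
    dp = [[0] * (len(r) + 1) for _ in range(len(c) + 1)]
    for i, ct in enumerate(c):
        for j, rt in enumerate(r):
            dp[i + 1][j + 1] = dp[i][j] + 1 if ct == rt else max(dp[i][j + 1], dp[i + 1][j])
    lcs = dp[len(c)][len(r)]
    if lcs == 0:
        return 0.0
    precision = lcs / len(c)
    recall = lcs / len(r)
    return 2 * precision * recall / (precision + recall)
```
In practice, evaluation for Indic scripts should use the multilingual ROUGE setup described in the paper rather than this whitespace-tokenized sketch.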
Citation

If you use this model, please cite the following paper:

@inproceedings{Kumar2022IndicNLGSM,
  title  = {IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
  author = {Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
  year   = {2022},
  url    = {https://arxiv.org/abs/2203.05437}
}

License

The model is available under the MIT License.

Model URL: https://huggingface.co/ai4bharat/MultiIndicWikiBioSS