ai4bharat / MultiIndicWikiBioUnified

Model last updated: March 29, 2022
Task: text2text-generation

Model Details of MultiIndicWikiBioUnified

MultiIndicWikiBioUnified is a multilingual, sequence-to-sequence pre-trained model, an IndicBART checkpoint fine-tuned on the 9 languages of the IndicWikiBio dataset. For fine-tuning details, see the paper. You can use MultiIndicWikiBio to build biography-generation applications for Indian languages by fine-tuning the model with supervised training data. Some salient features of MultiIndicWikiBio are:

  • Supported languages: Assamese, Bengali, Hindi, Oriya, Punjabi, Kannada, Malayalam, Tamil, and Telugu. Not all of these languages are supported by mBART50 and mT5.
  • The model is much smaller than the mBART and mT5(-base) models, making it less computationally expensive to fine-tune and decode.
  • Fine-tuned on Indic-language corpora (34,653 examples).
  • All languages have been represented in Devanagari script to encourage transfer learning among the related languages.

You can read more about MultiIndicWikiBioUnified in this paper.

Using this model in transformers
from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM
from transformers import AlbertTokenizer, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ai4bharat/MultiIndicWikiBioUnified", do_lower_case=False, use_fast=False, keep_accents=True)
# Or use tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/MultiIndicWikiBioUnified", do_lower_case=False, use_fast=False, keep_accents=True)

model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/MultiIndicWikiBioUnified")
# Or use model = MBartForConditionalGeneration.from_pretrained("ai4bharat/MultiIndicWikiBioUnified")

# Some initial mapping
bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>")
eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>")
pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>")
# To get lang_id use any of ['<2as>', '<2bn>', '<2hi>', '<2kn>', '<2ml>', '<2or>', '<2pa>', '<2ta>', '<2te>']

# First tokenize the input and the output. The format below is how IndicBART was trained, so the input should be "Sentence </s> <2xx>" where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>".
inp = tokenizer("<TAG> name </TAG> भीखा लाल <TAG> office </TAG> विधायक - 318 - हसनगंज विधान सभा निर्वाचन क्षेत्र , उत्तर प्रदेश <TAG> term </TAG> 1957 से 1962 <TAG> nationality </TAG> भारतीय</s><2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids 

out = tokenizer("<2hi> भीखा लाल ,भारत के उत्तर प्रदेश की दूसरी विधानसभा सभा में विधायक रहे। </s>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids 
# Teacher forcing: the decoder input drops the final token, the labels drop the leading language tag
model_outputs = model(input_ids=inp, decoder_input_ids=out[:, 0:-1], labels=out[:, 1:])

# For loss
model_outputs.loss ## This is not label smoothed.

# For logits
model_outputs.logits

# For generation. Note the decoder_start_token_id.
model.eval() # Set dropouts to zero

model_output = model.generate(inp, use_cache=True, no_repeat_ngram_size=3, encoder_no_repeat_ngram_size=3, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2hi>"))

# Decode to get output strings
decoded_output = tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)

print(decoded_output) # भीखा लाल ,भारत के उत्तर प्रदेश की दूसरी विधानसभा सभा में विधायक रहे।
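If you want to fine-tune further on your own supervised data, a minimal training-step sketch built on the loss computed above might look as follows; the optimizer choice and learning rate are illustrative assumptions, not settings from the paper:

import torch

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # illustrative hyperparameters

model.train()
optimizer.zero_grad()
model_outputs = model(input_ids=inp, decoder_input_ids=out[:, 0:-1], labels=out[:, 1:])
model_outputs.loss.backward()  # plain cross-entropy loss, as noted above (not label smoothed)
optimizer.step()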

Disclaimer

Note that if your output language is not Hindi or Marathi, you should convert its script from Devanagari to the desired language using the Indic NLP Library (https://github.com/AI4Bharat/indic-bart/blob/main/indic_scriptmap.py).

Note:

If you wish to use any language written in a non-Devanagari script, first convert it to Devanagari using the Indic NLP Library. After you get the output, convert it back into the original script.
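As an illustration, here is a minimal round-trip sketch for a Tamil input, assuming the indic-nlp-library package and its UnicodeIndicTransliterator (the indic_scriptmap.py script linked above wraps similar mapping logic); the variable names and language codes are illustrative:

from indicnlp.transliterate.unicode_transliterate import UnicodeIndicTransliterator

# tamil_text holds a Tamil-script infobox string in the <TAG> ... </TAG> format shown above
deva_input = UnicodeIndicTransliterator.transliterate(tamil_text, "ta", "hi")  # Tamil script -> Devanagari

# Tokenize deva_input (ending with "</s> <2ta>"), generate with decoder_start_token_id for "<2ta>",
# and decode to decoded_output as in the example above; then map the output back to Tamil script
tamil_output = UnicodeIndicTransliterator.transliterate(decoded_output, "hi", "ta")  # Devanagari -> Tamil script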

Benchmarks

Scores on the IndicWikiBio test sets are as follows:

Language    ROUGE-L
as          56.28
bn          57.42
hi          67.48
kn          40.01
ml          38.84
or          67.13
pa          52.88
ta          51.82
te          51.43
Citation

If you use this model, please cite the following paper:

@inproceedings{Kumar2022IndicNLGSM,
  title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
  author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
  year={2022},
  url={https://arxiv.org/abs/2203.05437}
}

License

The model is available under the MIT License.
