MultiIndicHeadlineGeneration is a multilingual sequence-to-sequence model focused solely on Indic languages. It currently supports 11 Indian languages and is finetuned from the IndicBART checkpoint. You can use the MultiIndicHeadlineGeneration model to build natural language generation applications in Indian languages, for tasks such as headline generation and other summarization-related tasks. Some salient features of MultiIndicHeadlineGeneration are:
- Supported languages: Assamese, Bengali, Gujarati, Hindi, Marathi, Odia, Punjabi, Kannada, Malayalam, Tamil, and Telugu. Not all of these languages are supported by mBART50 and mT5.
- The model is much smaller than the mBART and mT5(-base) models, making it less computationally expensive to finetune and decode with.
- Trained on large Indic-language corpora (1.316 million paragraphs and 5.9 million unique tokens).
- All languages are represented in the Devanagari script to encourage transfer learning among the related languages.
Usage:
from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM
from transformers import AlbertTokenizer, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("ai4bharat/MultiIndicHeadlineGenerationSS", do_lower_case=False, use_fast=False, keep_accents=True)
# Or use tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/MultiIndicHeadlineGenerationSS", do_lower_case=False, use_fast=False, keep_accents=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/MultiIndicHeadlineGenerationSS")
# Or use model = MBartForConditionalGeneration.from_pretrained("ai4bharat/MultiIndicHeadlineGenerationSS")
# Some initial mapping
bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>")
eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>")
pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>")
# To get lang_id use any of ['<2as>', '<2bn>', '<2gu>', '<2hi>', '<2kn>', '<2ml>', '<2mr>', '<2or>', '<2pa>', '<2ta>', '<2te>']
# First tokenize the input and outputs. The format below is how MultiIndicHeadlineGenerationSS was trained, so the input should be "Paragraph </s> <2xx>", where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>".
inp = tokenizer("यूट्यूब या फेसबुक पर वीडियो देखते समय आप भी बफरिंग की वजह से परेशान होते हैं? इसका जवाब हां है तो जल्द ही आपकी सारी समस्या खत्म होने वाली है। दरअसल, टेलीकॉम मिनिस्टर अश्विनी वैष्णव ने पिछले सप्ताह कहा कि अगस्त के अंत तक हर-हाल में '5G' इंटरनेट लॉन्च हो जाएगा। उन्होंने यह भी कहा है कि स्पेक्ट्रम की बिक्री शुरू हो चुकी है और जून तक ये प्रोसेस खत्म होने की संभावना है।</s> <2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids # tensor([[43615, 116, 4426, 46, . . . . 64001, 64006]])
out = tokenizer("<2hi> 5G इंटरनेट का इंतजार हुआ खत्म:अगस्त तक देश में शुरू हो सकती है 5G सर्विस </s>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids # tensor([[64006, 393, 1690, . . . . 1690, 11999, 64001]])
model_outputs=model(input_ids=inp, decoder_input_ids=out[:,0:-1], labels=out[:,1:])
# For loss
model_outputs.loss ## This is not label smoothed.
# For logits
model_outputs.logits
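# The loss above is plain cross-entropy. If you want a label-smoothed loss instead, one option
# (an illustrative sketch, not part of the released code; epsilon=0.1 is an assumed value) is to
# compute it from the logits:
import torch.nn.functional as F
def label_smoothed_loss(logits, labels, epsilon=0.1, ignore_index=pad_id):
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(-1, labels.unsqueeze(-1)).squeeze(-1)  # per-token negative log-likelihood
    uniform = -log_probs.mean(dim=-1)                              # uniform-smoothing term over the vocabulary
    mask = labels.ne(ignore_index)                                 # ignore padding positions
    return ((1.0 - epsilon) * nll[mask] + epsilon * uniform[mask]).mean()
smoothed_loss = label_smoothed_loss(model_outputs.logits, out[:, 1:])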
# For generation. Note the decoder_start_token_id, which selects the output language.
model.eval() # Set dropouts to zero
model_output=model.generate(inp, use_cache=True, num_beams=4, max_length=32, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2hi>"))
# Decode to get output strings
decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output) # अगस्त के अंत तक शुरू हो जाएगा '5G' इंटरनेट
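To generate headlines for several articles at once, a minimal batched sketch (the paragraph strings below are placeholders for real Devanagari articles, each ending in " </s> <2hi>") could look like this:
paragraphs = ["पहला लेख ... </s> <2hi>", "दूसरा लेख ... </s> <2hi>"]  # placeholder inputs
batch = tokenizer(paragraphs, add_special_tokens=False, return_tensors="pt", padding=True)
batch_output = model.generate(batch.input_ids, attention_mask=batch.attention_mask, use_cache=True, num_beams=4, max_length=32, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2hi>"))
headlines = [tokenizer.decode(o, skip_special_tokens=True, clean_up_tokenization_spaces=False) for o in batch_output]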
Note:
If you wish to use any language written in a non-Devanagari script, then you should first convert it to Devanagari using the Indic NLP Library. After you get the output, you should convert it back into the original script.
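For example, a minimal sketch using the Indic NLP Library's transliteration module (tamil_paragraph is a placeholder for your input text):
from indicnlp.transliterate.unicode_transliterate import UnicodeIndicTransliterator
# Convert a Tamil paragraph to Devanagari before building the "Paragraph </s> <2ta>" input.
paragraph_deva = UnicodeIndicTransliterator.transliterate(tamil_paragraph, "ta", "hi")
# ... tokenize paragraph_deva + " </s> <2ta>" and run model.generate() as shown above ...
# Convert the decoded Devanagari headline back to Tamil script.
headline_tamil = UnicodeIndicTransliterator.transliterate(decoded_output, "hi", "ta")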
Benchmarks
Scores on the MultiIndicHeadlineGeneration test sets are as follows:
Language | Rouge-1 / Rouge-2 / Rouge-L
as       | 46.06 / 30.02 / 44.64
bn       | 34.22 / 19.18 / 32.60
gu       | 33.49 / 17.49 / 31.79
hi       | 37.14 / 18.04 / 32.70
kn       | 64.82 / 53.91 / 64.10
ml       | 58.69 / 47.18 / 57.94
mr       | 35.20 / 19.50 / 34.08
or       | 22.51 / 9.00 / 21.62
pa       | 46.47 / 29.07 / 43.25
ta       | 47.39 / 31.39 / 45.94
te       | 37.69 / 21.89 / 36.66
average  | 42.15 / 26.97 / 40.48
Contributors
Aman Kumar
Prachi Sahu
Himani Shrotriya
Raj Dabre
Anoop Kunchukuttan
Ratish Puduppully
Mitesh M. Khapra
Pratyush Kumar
Paper
If you use MultiIndicHeadlineGeneration, please cite the following paper:
@inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437"
}