Multilingual language model. This model was trained on 61 languages from 25 language families (see the list below).
Dataset
The model was pretrained on 600 GB of texts, mostly from MC4 and Wikipedia. The training data was deduplicated: each text in the corpus was hashed with a 64-bit hash, and only texts with unique hashes were kept. We also filtered documents by their text compression rate using zlib, discarding the most strongly and the most weakly compressing deduplicated texts.
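As an illustration of this filtering step, here is a minimal sketch. The 64-bit hash construction (here, the first 8 bytes of an MD5 digest) and the compression-rate thresholds are assumptions, since the card does not specify them.

```python
import hashlib
import zlib

def text_hash64(text: str) -> int:
    # 64-bit hash of a text; the card only says a 64-bit hash is used,
    # so deriving it from an MD5 digest here is an assumption.
    return int.from_bytes(hashlib.md5(text.encode("utf-8")).digest()[:8], "big")

def compression_rate(text: str) -> float:
    # Compressed-to-raw size ratio with zlib: very low values indicate
    # highly repetitive text, values near 1.0 indicate near-random text.
    raw = text.encode("utf-8")
    return len(zlib.compress(raw)) / max(len(raw), 1)

def filter_corpus(texts, low=0.3, high=0.9):
    # The low/high thresholds are illustrative; the card only states that
    # the most strongly and most weakly compressing texts are discarded.
    seen = set()
    for text in texts:
        h = text_hash64(text)
        if h in seen:
            continue  # drop exact duplicates by 64-bit hash
        seen.add(h)
        if low <= compression_rate(text) <= high:
            yield text
```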
[Figure: number of tokens for each language in the pretraining corpus, on a logarithmic scale]
Languages
Afrikaans (af), Arabic (ar), Armenian (hy), Azerbaijani (az), Bashkir (ba), Basque (eu), Belarusian (be), Bengali (bn), Bulgarian (bg), Burmese (my), Buryat (bxr), Chuvash (cv), Danish (da), English (en), Estonian (et), Finnish (fi), French (fr), Georgian (ka), German (de), Greek (el), Hebrew (he), Hindi (hi), Hungarian (hu), Indonesian (id), Italian (it), Japanese (ja), Javanese (jv), Kalmyk (xal), Kazakh (kk), Korean (ko), Kyrgyz (ky), Latvian (lv), Lithuanian (lt), Malay (ms), Malayalam (ml), Marathi (mr), Mongolian (mn), Ossetian (os), Persian (fa), Polish (pl), Portuguese (pt), Romanian (ro), Russian (ru), Spanish (es), Swahili (sw), Swedish (sv), Tagalog (tl), Tajik (tg), Tamil (ta), Tatar (tt), Telugu (te), Thai (th), Turkish (tr), Turkmen (tk), Tuvan (tyv), Ukrainian (uk), Urdu (ur), Uzbek (uz), Vietnamese (vi), Yakut (sah), Yoruba (yo)
By language family
Afro-Asiatic: Arabic (ar), Hebrew (he)
Austro-Asiatic: Vietnamese (vi)
Austronesian: Indonesian (id), Javanese (jv), Malay (ms), Tagalog (tl)
Baltic: Latvian (lv), Lithuanian (lt)
Basque: Basque (eu)
Dravidian: Malayalam (ml), Tamil (ta), Telugu (te)
Indo-European (Armenian): Armenian (hy)
Indo-European (Indo-Aryan): Bengali (bn), Hindi (hi), Marathi (mr), Urdu (ur)
Indo-European (Germanic): Afrikaans (af), Danish (da), English (en), German (de), Swedish (sv)
Indo-European (Greek): Greek (el)
Indo-European (Iranian): Ossetian (os), Persian (fa), Tajik (tg)
Indo-European (Romance): French (fr), Italian (it), Portuguese (pt), Romanian (ro), Spanish (es)
Japonic: Japanese (ja)
Kartvelian: Georgian (ka)
Koreanic: Korean (ko)
Kra-Dai: Thai (th)
Mongolic: Buryat (bxr), Kalmyk (xal), Mongolian (mn)
Niger-Congo: Swahili (sw), Yoruba (yo)
Sino-Tibetan: Burmese (my)
Slavic: Belarusian (be), Bulgarian (bg), Polish (pl), Russian (ru), Ukrainian (uk)
Turkic (Karluk): Uzbek (uz)
Turkic (Kipchak): Bashkir (ba), Kazakh (kk), Kyrgyz (ky), Tatar (tt)
Turkic (Oghuz): Azerbaijani (az), Chuvash (cv), Turkish (tr), Turkmen (tk)
Turkic (Siberian): Tuvan (tyv), Yakut (sah)
Uralic: Estonian (et), Finnish (fi), Hungarian (hu)
Technical details
The models were pretrained on 16 V100 GPUs for 600k training steps with a fixed set of hyperparameters: a vocabulary of 100k tokens, a context window of 2048 tokens, a learning rate of 2e-4, and a batch size of 4.
The mGPT architecture is based on GPT-3. We follow the architecture description by Brown et al. and build on the GPT-2 code base (Radford et al., 2019) from the HuggingFace library (Wolf et al., 2020) and Megatron-LM (Shoeybi et al., 2019).
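For reference, here is a minimal sketch of loading the released checkpoint with the standard HuggingFace Transformers API. The model id ai-forever/mGPT-13B matches the Hub page this card describes; the prompt and sampling settings are illustrative. Note that a 13B-parameter checkpoint needs tens of gigabytes of memory, and device_map="auto" requires the accelerate package.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and the causal LM weights from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("ai-forever/mGPT-13B")
model = AutoModelForCausalLM.from_pretrained(
    "ai-forever/mGPT-13B",
    device_map="auto",   # shard across available devices (requires accelerate)
    torch_dtype="auto",  # use the checkpoint's native precision
)

# Illustrative prompt; mGPT generates in any of its 61 pretraining languages.
prompt = "The Moon is a natural satellite of"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```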
Perplexity
The mGPT-13B model achieves the best perplexities, within the 2-to-10 score range, for the majority of languages, including Dravidian (Malayalam, Tamil, Telugu), Indo-Aryan (Bengali, Hindi, Marathi), Slavic (Belarusian, Bulgarian, Russian, Ukrainian), Sino-Tibetan (Burmese), Kipchak (Bashkir, Kazakh), and others. Only seven languages from different families show higher perplexities, up to 20.
[Figure: language-wise perplexity results]
[Figure: family-wise perplexity results. The scores are averaged over the number of languages within each family.]
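The numbers above are the authors' reported results. As a reference point, token-level perplexity for a single text can be computed from the released checkpoint with the standard Transformers recipe sketched below; this is the textbook definition (exponentiated mean negative log-likelihood), not necessarily the exact evaluation protocol behind the reported scores.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def perplexity(model, tokenizer, text: str) -> float:
    # Exponentiated mean negative log-likelihood of each token given its
    # left context, for sequences up to the 2048-token context window.
    enc = tokenizer(text, return_tensors="pt").to(model.device)
    with torch.no_grad():
        # Passing labels=input_ids makes the model return the mean
        # cross-entropy loss over all next-token predictions.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

tokenizer = AutoTokenizer.from_pretrained("ai-forever/mGPT-13B")
model = AutoModelForCausalLM.from_pretrained(
    "ai-forever/mGPT-13B", device_map="auto", torch_dtype="auto"
)
print(perplexity(model, tokenizer, "An example sentence in one of the 61 languages."))
```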
More Information About the mGPT-13B Model on huggingface.co
mGPT-13B is an AI model hosted on huggingface.co that can be used instantly via the ai-forever mGPT-13B checkpoint. huggingface.co supports a free trial of the mGPT-13B model as well as paid use, and the model can be called through an API from Node.js, Python, or plain HTTP.
huggingface.co is an online trial and API platform that integrates mGPT-13B's modeling capabilities, including API services, and provides a free online trial of mGPT-13B; you can try mGPT-13B online for free via the link below.
ai-forever mGPT-13B free online URL on huggingface.co:
mGPT-13B is an open-source model available on GitHub, where any user can find and install it for free. huggingface.co also hosts the installed model, so users can debug and trial mGPT-13B directly on huggingface.co, with free API access supported.