Wav2Vec2-Large-LV60 fine-tuned on multilingual Common Voice
This checkpoint leverages the pretrained checkpoint wav2vec2-large-lv60 and is fine-tuned on CommonVoice to recognize phonetic labels in multiple languages.
When using the model, make sure that your speech input is sampled at 16 kHz.
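If your audio is stored at a different sampling rate, resample it first. A minimal sketch using torchaudio (the file path and the check against the native rate are just examples):

import torchaudio

# load a local file; torchaudio returns the waveform and its native sampling rate
waveform, sample_rate = torchaudio.load("audio.wav")  # hypothetical path
# resample to the 16 kHz the model expects
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, orig_freq=sample_rate, new_freq=16_000)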
Note that the model outputs a string of phonetic labels. To obtain words, the phonetic labels have to be mapped back to words with a pronunciation dictionary.
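Such a dictionary is not part of this checkpoint; for illustration, a toy lookup against a hypothetical lexicon could look like this (the entries and the exact-match strategy are assumptions for the sketch, not the model's decoding method):

# hypothetical lexicon mapping space-separated phonetic labels to words
lexicon = {
    "h ə l oʊ": "hello",
    "w ɜː l d": "world",
}

def phonemes_to_word(phoneme_str: str) -> str:
    # exact lookup; a real system would use a full lexicon and a decoder
    return lexicon.get(phoneme_str, "<unk>")

print(phonemes_to_word("h ə l oʊ"))  # => hello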
Paper: Simple and Effective Zero-shot Cross-lingual Phoneme Recognition
Authors: Qiantong Xu, Alexei Baevski, Michael Auli
Abstract
Recent progress in self-training, self-supervised pretraining and unsupervised learning has enabled well-performing speech recognition systems without any labeled data. However, in many cases there is labeled data available for related languages which is not utilized by these methods. This paper extends previous work on zero-shot cross-lingual transfer learning by fine-tuning a multilingually pretrained wav2vec 2.0 model to transcribe unseen languages. This is done by mapping phonemes of the training languages to the target language using articulatory features. Experiments show that this simple method significantly outperforms prior work which introduced task-specific architectures and used only part of a monolingually pretrained model.
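As a rough illustration of the phoneme-mapping idea from the abstract, phonemes can be represented as articulatory feature vectors and each source phoneme mapped to the closest target-language phoneme; the tiny feature table below is invented for the sketch, not taken from the paper:

# toy articulatory feature vectors (voiced, nasal, labial); values are illustrative only
source_features = {"b": (1, 0, 1), "m": (1, 1, 1)}
target_features = {"p": (0, 0, 1), "n": (1, 1, 0)}

def nearest_target(phoneme: str) -> str:
    # pick the target phoneme with the smallest Hamming distance in feature space
    src = source_features[phoneme]
    return min(
        target_features,
        key=lambda t: sum(a != b for a, b in zip(src, target_features[t])),
    )

print(nearest_target("b"))  # => "p" under this toy feature table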
To transcribe audio files, the model can be used as a standalone acoustic model as follows:
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-lv-60-espeak-cv-ft")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-lv-60-espeak-cv-ft")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_values = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt").input_values
# retrieve logits
with torch.no_grad():
    logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
# => should give ['m ɪ s t ɚ k w ɪ l t ɚ ɹ ɪ z ð ɪ ɐ p ɑː s əl ʌ v ð ə m ɪ d əl k l æ s ᵻ z æ n d w iː ɑːɹ ɡ l æ d t ə w ɛ l k ə m h ɪ z ɡ ɑː s p əl']
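Batched inference works the same way; a sketch, assuming two samples from the dummy dataset above (padding=True pads to the longest input and also returns an attention mask, which this architecture uses):

# batch several utterances; the processor pads them to equal length
inputs = processor(
    [ds[0]["audio"]["array"], ds[1]["audio"]["array"]],
    sampling_rate=16_000,
    padding=True,
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
transcriptions = processor.batch_decode(torch.argmax(logits, dim=-1))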