This is the base model, pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
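For example, a minimal feature-extraction sketch might look like the following. This assumes the transformers and torchaudio packages are installed; the file name speech.wav is a hypothetical placeholder.

```python
# Minimal sketch: extract hidden representations from 16kHz audio.
# "speech.wav" is a hypothetical local file, not shipped with the model.
import torch
import torchaudio
from transformers import AutoFeatureExtractor, AutoModel

feature_extractor = AutoFeatureExtractor.from_pretrained("ntu-spml/distilhubert")
model = AutoModel.from_pretrained("ntu-spml/distilhubert")

# Load an audio file and resample to 16kHz if it was recorded at another rate.
waveform, sample_rate = torchaudio.load("speech.wav")
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = feature_extractor(
    waveform.squeeze(0).numpy(), sampling_rate=16_000, return_tensors="pt"
)
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, 768)
```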
Note: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out this blog for a more detailed explanation of how to fine-tune the model.
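As a rough sketch of what such a fine-tuning setup might look like, the snippet below builds a CTC tokenizer from a hypothetical vocab.json (which you would create from your labeled transcripts) and loads the pretrained encoder with a newly initialized CTC head. The file name, special tokens, and hyperparameters here are assumptions, not part of the released model:

```python
# Rough fine-tuning sketch. vocab.json is a hypothetical character-level
# vocabulary built from your own labeled text; it is not provided by the model.
from transformers import (
    HubertForCTC,
    Wav2Vec2CTCTokenizer,
    Wav2Vec2FeatureExtractor,
    Wav2Vec2Processor,
)

tokenizer = Wav2Vec2CTCTokenizer(
    "vocab.json", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|"
)
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("ntu-spml/distilhubert")
processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)

# The CTC head is randomly initialized and must be trained on labeled data.
model = HubertForCTC.from_pretrained(
    "ntu-spml/distilhubert",
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer),
)
```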
Authors: Heng-Jui Chang, Shu-wen Yang, Hung-yi Lee
Abstract
Self-supervised speech representation learning methods like wav2vec 2.0 and Hidden-unit BERT (HuBERT) leverage unlabeled speech data for pre-training and offer good representations for numerous speech processing tasks. Despite the success of these methods, they require large memory and high pre-training costs, making them inaccessible for researchers in academia and small companies. Therefore, this paper introduces DistilHuBERT, a novel multi-task learning framework to distill hidden representations from a HuBERT model directly. This method reduces HuBERT's size by 75% while making it 73% faster, retaining most of its performance across ten different tasks. Moreover, DistilHuBERT required little training time and data, opening the possibilities of pre-training personal and on-device SSL models for speech.
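To illustrate the multi-task distillation idea described in the abstract, the sketch below shows per-layer prediction heads and a combined L1 plus cosine-similarity objective. The distilled teacher layers (4, 8, 12) follow the paper, but the head architecture and loss weighting here are schematic simplifications, not the reference implementation:

```python
# Schematic sketch of multi-task hidden-representation distillation.
# Head design and loss weighting are simplifications of the paper's setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DistillationHeads(nn.Module):
    """One linear prediction head per distilled teacher layer (e.g. 4, 8, 12)."""

    def __init__(self, hidden_size: int = 768, num_heads: int = 3):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.Linear(hidden_size, hidden_size) for _ in range(num_heads)
        )

    def forward(self, student_hidden: torch.Tensor) -> list[torch.Tensor]:
        # Each head predicts one teacher layer from the student's last layer.
        return [head(student_hidden) for head in self.heads]

def distill_loss(predictions, teacher_layers, lam: float = 1.0) -> torch.Tensor:
    """L1 distance plus a negative log-sigmoid cosine-similarity term,
    summed over the distilled teacher layers."""
    loss = torch.tensor(0.0)
    for pred, target in zip(predictions, teacher_layers):
        l1 = F.l1_loss(pred, target)
        cos = F.cosine_similarity(pred, target, dim=-1)
        loss = loss + l1 - lam * F.logsigmoid(cos).mean()
    return loss
```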