The model was pretrained on 16kHz sampled speech audio with utterance and speaker contrastive loss. When using the model, make sure that your speech input is also sampled at 16kHz.
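If your audio is stored at a different sampling rate, you can resample it before feature extraction. A minimal sketch using the datasets library's Audio feature (the dataset name and the "audio" column name are placeholders; this assumes your data is loaded as a Hugging Face dataset):

from datasets import load_dataset, Audio

dataset = load_dataset("your/dataset", split="train")  # placeholder dataset
# decode and resample the "audio" column to 16kHz on the fly
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))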
Abstract
Self-supervised learning (SSL) achieves great success in speech recognition, while limited exploration has been attempted for other speech processing tasks. As the speech signal contains multi-faceted information including speaker identity, paralinguistics, spoken content, etc., learning universal representations for all speech tasks is challenging. In this paper, we propose a new pre-trained model, WavLM, to solve full-stack downstream speech tasks. WavLM is built based on the HuBERT framework, with an emphasis on both spoken content modeling and speaker identity preservation. We first equip the Transformer structure with gated relative position bias to improve its capability on recognition tasks. For better speaker discrimination, we propose an utterance mixing training strategy, where additional overlapped utterances are created in an unsupervised manner and incorporated during model training. Lastly, we scale up the training dataset from 60k hours to 94k hours. WavLM Large achieves state-of-the-art performance on the SUPERB benchmark, and brings significant improvements for various speech processing tasks on their representative benchmarks.
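To make the utterance mixing idea concrete, the sketch below overlaps a randomly chosen chunk of a secondary utterance onto a primary one so that the model must keep attending to the primary speaker. This is an illustrative sketch only, not the exact training recipe from the paper; the mixing ratio and the energy scaling are assumptions:

import torch

def mix_utterances(primary, secondary, max_mix_ratio=0.5):
    # Overlap a chunk of a secondary utterance onto the primary one at a
    # random position (assumed mixing ratio; not the paper's exact recipe).
    mix_len = int(len(primary) * torch.rand(1).item() * max_mix_ratio)
    mix_len = min(mix_len, len(secondary))
    if mix_len == 0:
        return primary
    start = torch.randint(0, len(primary) - mix_len + 1, (1,)).item()
    sec_start = torch.randint(0, len(secondary) - mix_len + 1, (1,)).item()
    chunk = secondary[sec_start:sec_start + mix_len]
    # scale the interfering chunk so the primary speaker stays dominant
    scale = primary.pow(2).mean().sqrt() / (chunk.pow(2).mean().sqrt() + 1e-8)
    mixed = primary.clone()
    mixed[start:start + mix_len] = mixed[start:start + mix_len] + chunk * scale
    return mixed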
from transformers import Wav2Vec2FeatureExtractor, WavLMForXVector
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained('microsoft/wavlm-base-sv')
model = WavLMForXVector.from_pretrained('microsoft/wavlm-base-sv')
# audio files are decoded on the fly
audio = [x["array"] for x in dataset[:2]["audio"]]
inputs = feature_extractor(audio, padding=True, return_tensors="pt")
with torch.no_grad():
    embeddings = model(**inputs).embeddings
embeddings = torch.nn.functional.normalize(embeddings, dim=-1).cpu()
# the resulting embeddings can be used for cosine similarity-based retrieval
cosine_sim = torch.nn.CosineSimilarity(dim=-1)
similarity = cosine_sim(embeddings[0], embeddings[1])
threshold = 0.86  # the optimal threshold is dataset-dependent
if similarity < threshold:
    print("Speakers are not the same!")