Contrastive learning has shown remarkable success in multimodal representation learning. In this paper, we propose a pipeline of contrastive language-audio pretraining that develops an audio representation by combining audio data with natural language descriptions. To accomplish this goal, we first release LAION-Audio-630K, a large collection of 633,526 audio-text pairs from different data sources. Second, we construct a contrastive language-audio pretraining model, considering different audio encoders and text encoders. We incorporate a feature fusion mechanism and keyword-to-caption augmentation into the model design, enabling the model to process audio inputs of variable lengths and improving performance. Third, we perform comprehensive experiments to evaluate the model on three tasks: text-to-audio retrieval, zero-shot audio classification, and supervised audio classification. The results demonstrate that the model achieves superior performance on the text-to-audio retrieval task. On audio classification, it achieves state-of-the-art results in the zero-shot setting and performance comparable to models trained in the non-zero-shot (supervised) setting. Both LAION-Audio-630K and the proposed model are publicly available.
Usage
You can use this model for zero-shot audio classification or for extracting audio and/or text features.
Perform zero-shot audio classification
Using pipeline
from datasets import load_dataset
from transformers import pipeline

# Load an example clip from the ESC-50 environmental sound dataset
dataset = load_dataset("ashraq/esc50")
audio = dataset["train"]["audio"][-1]["array"]

# Classify the clip against free-form candidate labels
audio_classifier = pipeline(task="zero-shot-audio-classification", model="laion/clap-htsat-fused")
output = audio_classifier(audio, candidate_labels=["Sound of a dog", "Sound of vacuum cleaner"])
print(output)
>>> [{"score": 0.999, "label": "Sound of a dog"}, {"score": 0.001, "label": "Sound of vacuum cleaner"}]
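If you want to see what the pipeline does under the hood, the same zero-shot classification can be done directly with ClapModel and ClapProcessor. The following is a minimal sketch, assuming the same ESC-50 clip and candidate labels as above; logits_per_audio holds the audio-text similarity scores, and a softmax over the labels turns them into probabilities.

from datasets import load_dataset
from transformers import ClapModel, ClapProcessor
import torch

dataset = load_dataset("ashraq/esc50")
audio = dataset["train"]["audio"][-1]["array"]

model = ClapModel.from_pretrained("laion/clap-htsat-fused")
processor = ClapProcessor.from_pretrained("laion/clap-htsat-fused")

# Candidate labels assumed to match the pipeline example above
labels = ["Sound of a dog", "Sound of vacuum cleaner"]
inputs = processor(text=labels, audios=audio, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# Audio-text similarity scores -> per-label probabilities
probs = outputs.logits_per_audio.softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))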
Run the model
You can also get the audio and text embeddings using ClapModel.
Run the model on CPU:
from datasets import load_dataset
from transformers import ClapModel, ClapProcessor

# Load a dummy speech sample for demonstration
librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio_sample = librispeech_dummy[0]

model = ClapModel.from_pretrained("laion/clap-htsat-fused")
processor = ClapProcessor.from_pretrained("laion/clap-htsat-fused")

# Preprocess the raw waveform and extract the audio embedding
inputs = processor(audios=audio_sample["audio"]["array"], return_tensors="pt")
audio_embed = model.get_audio_features(**inputs)
Run the model on GPU:
from datasets import load_dataset
from transformers import ClapModel, ClapProcessor

librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio_sample = librispeech_dummy[0]

# Move both the model and the preprocessed inputs to GPU device 0
model = ClapModel.from_pretrained("laion/clap-htsat-fused").to(0)
processor = ClapProcessor.from_pretrained("laion/clap-htsat-fused")
inputs = processor(audios=audio_sample["audio"]["array"], return_tensors="pt").to(0)
audio_embed = model.get_audio_features(**inputs)
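The examples above only extract audio embeddings; text embeddings are available through get_text_features. Below is a minimal sketch, using an arbitrary example caption (the caption text is an assumption, not from the original card):

from transformers import ClapModel, ClapProcessor

model = ClapModel.from_pretrained("laion/clap-htsat-fused")
processor = ClapProcessor.from_pretrained("laion/clap-htsat-fused")

# Tokenize a free-form description (the caption here is an arbitrary example)
inputs = processor(text=["A person speaking in a quiet room"], return_tensors="pt", padding=True)
text_embed = model.get_text_features(**inputs)

Since CLAP embeds audio and text in a shared space, similarity between audio_embed and text_embed can be used for text-to-audio retrieval.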
Citation
If you use this model in your work, please consider citing the original paper:
@misc{https://doi.org/10.48550/arxiv.2211.06687,
doi = {10.48550/ARXIV.2211.06687},
url = {https://arxiv.org/abs/2211.06687},
author = {Wu, Yusong and Chen, Ke and Zhang, Tianyu and Hui, Yuchen and Berg-Kirkpatrick, Taylor and Dubnov, Shlomo},
keywords = {Sound (cs.SD), Audio and Speech Processing (eess.AS), FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering},
title = {Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}