NVIDIA NeMo Canary
NeMo Canary is a family of multilingual, multi-tasking models that achieves state-of-the-art performance on multiple benchmarks. With 1 billion parameters, Canary-1B supports automatic speech-to-text recognition (ASR) in four languages (English, German, French, Spanish) and translation from English to German/French/Spanish and from German/French/Spanish to English, with or without punctuation and capitalization (PnC).
Model Architecture
Canary is an encoder-decoder model with a FastConformer [1] encoder and a Transformer decoder [2]. With audio features extracted from the encoder, task tokens such as <source language>, <target language>, <task> and <toggle PnC> are fed into the Transformer decoder to trigger the text generation process. Canary uses a concatenated tokenizer [5] built from individual SentencePiece [3] tokenizers for each language, which makes it easy to scale up to more languages. The Canary-1B model has 24 encoder layers and 24 decoder layers in total.
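As an illustration of this prompting scheme, the decoder prompt can be thought of as a short sequence of special tokens prepended before generation. The literal token strings below are hypothetical placeholders, not the model's actual vocabulary entries, which are defined inside the NeMo checkpoint:

```python
# Hypothetical sketch of how a Canary-style decoder prompt is assembled.
# The token spellings are illustrative placeholders only.
def build_decoder_prompt(source_lang, target_lang, task, pnc):
    """Assemble the task-token prompt fed into the Transformer decoder."""
    return [
        f"<|{source_lang}|>",                # <source language> token
        f"<|{target_lang}|>",                # <target language> token
        f"<|{task}|>",                       # <task> token, e.g. transcribe/translate
        f"<|{'pnc' if pnc else 'nopnc'}|>",  # <toggle PnC> token
    ]

# German-to-English translation with punctuation and capitalization:
prompt = build_decoder_prompt("de", "en", "translate", pnc=True)
print(prompt)
```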
NVIDIA NeMo
To train, fine-tune, or transcribe with Canary, you will need to install NVIDIA NeMo. We recommend installing it after you have installed Cython and the latest version of PyTorch.
The model is available for use in the NeMo toolkit [4], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Input to Canary can be either a list of paths to audio files or a JSONL manifest file.
If the input is a list of paths, Canary assumes the audio is English and transcribes it; that is, English ASR is Canary's default behaviour.
predicted_text = canary_model.transcribe(
paths2audio_files=['path1.wav', 'path2.wav'],
batch_size=16, # batch size to run the inference with
)
To use Canary to transcribe other supported languages or perform speech-to-text translation, specify the input as a JSONL manifest file, where each line is a dictionary containing the following fields:
# Example of a line in input_manifest.json
{
    "audio_filepath": "/path/to/audio.wav",  # path to the audio file
    "duration": 1000,  # duration of the audio, can be set to `None` if using NeMo main branch
    "taskname": "asr",  # use "s2t_translation" for speech-to-text translation with r1.23, or "ast" if using the NeMo main branch
    "source_lang": "en",  # language of the audio input, set `source_lang`==`target_lang` for ASR, choices=['en','de','es','fr']
    "target_lang": "en",  # language of the text output, choices=['en','de','es','fr']
    "pnc": "yes",  # whether to have PnC output, choices=['yes', 'no']
    "answer": "na",
}
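Since the manifest is in JSON Lines format, one JSON object per line, it can be generated with the standard `json` module. A minimal sketch, using the field names shown above and a placeholder audio path:

```python
import json

# Each manifest line is a self-contained JSON object (JSON Lines format).
entries = [
    {
        "audio_filepath": "/path/to/audio.wav",  # placeholder path
        "duration": None,       # may be None on the NeMo main branch
        "taskname": "asr",
        "source_lang": "en",    # source_lang == target_lang for ASR
        "target_lang": "en",
        "pnc": "yes",
        "answer": "na",
    },
]

with open("input_manifest.json", "w") as f:
    for entry in entries:
        f.write(json.dumps(entry) + "\n")
```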
and then use:
predicted_text = canary_model.transcribe(
"<path to input manifest file>",
batch_size=16, # batch size to run the inference with
)
Automatic Speech-to-text Recognition (ASR)
An example manifest for transcribing English audio can be:
# Example of a line in input_manifest.json
{
    "audio_filepath": "/path/to/audio.wav",  # path to the audio file
    "duration": 1000,  # duration of the audio, can be set to `None` if using NeMo main branch
    "taskname": "asr",
    "source_lang": "en",  # language of the audio input, set `source_lang`==`target_lang` for ASR, choices=['en','de','es','fr']
    "target_lang": "en",  # language of the text output, choices=['en','de','es','fr']
    "pnc": "yes",  # whether to have PnC output, choices=['yes', 'no']
    "answer": "na",
}
Automatic Speech-to-text Translation (AST)
An example manifest for translating English audio into German text can be:
# Example of a line in input_manifest.json
{
    "audio_filepath": "/path/to/audio.wav",  # path to the audio file
    "duration": 1000,  # duration of the audio, can be set to `None` if using NeMo main branch
    "taskname": "s2t_translation",  # r1.23 only recognizes "s2t_translation", but "ast" is supported if using the NeMo main branch
    "source_lang": "en",  # language of the audio input, choices=['en','de','es','fr']
    "target_lang": "de",  # language of the text output, choices=['en','de','es','fr']
    "pnc": "yes",  # whether to have PnC output, choices=['yes', 'no']
    "answer": "na"
}
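Because the valid values for each field differ by task, a small helper can validate an entry before it is written to the manifest. This is a convenience sketch, not part of NeMo; the allowed values are taken from the field comments above:

```python
SUPPORTED_LANGS = {"en", "de", "es", "fr"}
SUPPORTED_TASKS = {"asr", "s2t_translation", "ast"}

def make_entry(audio_filepath, taskname, source_lang, target_lang, pnc="yes"):
    """Build one Canary manifest entry, validating the documented choices.

    Hypothetical helper for illustration; field names follow the model card.
    """
    if taskname not in SUPPORTED_TASKS:
        raise ValueError(f"unknown taskname: {taskname}")
    if source_lang not in SUPPORTED_LANGS or target_lang not in SUPPORTED_LANGS:
        raise ValueError("languages must be one of en/de/es/fr")
    if taskname == "asr" and source_lang != target_lang:
        raise ValueError("ASR requires source_lang == target_lang")
    return {
        "audio_filepath": audio_filepath,
        "duration": None,  # may be None on the NeMo main branch
        "taskname": taskname,
        "source_lang": source_lang,
        "target_lang": target_lang,
        "pnc": pnc,
        "answer": "na",
    }

# English-to-German speech translation entry:
entry = make_entry("/path/to/audio.wav", "s2t_translation", "en", "de")
```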
Alternatively, one can use the transcribe_speech.py script to do the same:
# transcribes all the wav files in audio_directory
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
    pretrained_name="nvidia/canary-1b" \
    audio_dir="<path to audio_directory>"

python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
    pretrained_name="nvidia/canary-1b" \
    dataset_manifest="<path to manifest file>"
Input
This model accepts single channel (mono) audio sampled at 16000 Hz, along with the task/languages/PnC tags as input.
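Audio in another format must be converted before inference. As a quick sanity check, the standard-library `wave` module can verify that a file meets the mono 16 kHz requirement; the file name below is a placeholder created just for the demonstration:

```python
import wave

def check_canary_input(path):
    """Return True if the WAV file is single-channel (mono) 16 kHz audio."""
    with wave.open(path, "rb") as wav:
        return wav.getnchannels() == 1 and wav.getframerate() == 16000

# Create a one-second silent mono 16 kHz WAV file to demonstrate the check.
with wave.open("sample_16k.wav", "wb") as wav:
    wav.setnchannels(1)      # mono
    wav.setsampwidth(2)      # 16-bit PCM
    wav.setframerate(16000)  # 16 kHz
    wav.writeframes(b"\x00\x00" * 16000)

print(check_canary_input("sample_16k.wav"))
```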
Output
The model outputs the transcribed/translated text corresponding to the input audio, in the specified target language and with or without punctuation and capitalization.
Training
Canary-1B is trained using the NVIDIA NeMo toolkit [4] for 150k steps with dynamic bucketing and a batch duration of 360s per GPU on 128 NVIDIA A100 80GB GPUs.
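A batch duration budget means batches are sized by total seconds of audio rather than by a fixed count of utterances, so many short clips or a few long ones fill one batch. A simplified sketch of the idea (not NeMo's actual dataloader, which also sorts utterances into duration buckets first):

```python
def duration_batches(utterances, max_batch_seconds=360.0):
    """Greedily pack (id, duration) pairs into batches capped by total duration.

    Simplified illustration of duration-based batching.
    """
    batches, current, total = [], [], 0.0
    for utt_id, dur in utterances:
        if current and total + dur > max_batch_seconds:
            batches.append(current)
            current, total = [], 0.0
        current.append(utt_id)
        total += dur
    if current:
        batches.append(current)
    return batches

utts = [("a", 200.0), ("b", 150.0), ("c", 100.0), ("d", 300.0)]
print(duration_batches(utts))
```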
The model can be trained using this example script and base config. The tokenizers for these models were built using the text transcripts of the train set with this script.
Datasets
The Canary-1B model is trained on a total of 85k hours of speech data: 31k hours of public data, 20k hours collected by Suno, and 34k hours of in-house data.
As outlined in the paper "Towards Measuring Fairness in AI: the Casual Conversations Dataset", we assessed the Canary-1B model for fairness. The model was evaluated on the CasualConversations-v1 dataset, and the results are reported as follows:
Gender Bias:

| Gender         | Male  | Female | N/A   | Other  |
|----------------|-------|--------|-------|--------|
| Num utterances | 19325 | 24532  | 926   | 33     |
| % WER          | 14.64 | 12.92  | 17.88 | 126.92 |
Age Bias:

| Age Group      | (18-30) | (31-45) | (46-85) | (1-100) |
|----------------|---------|---------|---------|---------|
| Num utterances | 15956   | 14585   | 13349   | 43890   |
| % WER          | 14.64   | 13.07   | 13.47   | 13.76   |
(Error rates for fairness evaluation are determined by normalizing both the reference and predicted text, similar to the methods used in the evaluations found at https://github.com/huggingface/open_asr_leaderboard.)
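Such normalization typically lowercases the text and strips punctuation before scoring, so that formatting differences do not count as word errors. A minimal sketch of the idea (the leaderboard linked above uses a more thorough normalizer):

```python
import re

def normalize(text):
    """Lowercase, drop punctuation (keeping apostrophes), collapse whitespace."""
    text = text.lower()
    text = re.sub(r"[^\w\s']", " ", text)  # replace punctuation with spaces
    return " ".join(text.split())          # collapse runs of whitespace

print(normalize("Hello, World!  It's 5 o'clock."))
```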
NVIDIA Riva: Deployment
NVIDIA Riva is an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, at the edge, and embedded.
Additionally, Riva provides:

- World-class out-of-the-box accuracy for the most common languages, with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours
- Best-in-class accuracy with run-time word boosting (e.g., brand and product names) and customization of acoustic model, language model, and inverse text normalization
- Streaming speech recognition, Kubernetes-compatible scaling, and enterprise-grade support
License to use this model is covered by the CC-BY-NC-4.0 license. By downloading the public and release version of the model, you accept the terms and conditions of the CC-BY-NC-4.0 license.