Web: http://arxiv.org/abs/2110.04484

June 20, 2022, 1:12 a.m. | Han Zhu, Li Wang, Jindong Wang, Gaofeng Cheng, Pengyuan Zhang, Yonghong Yan

cs.CL updates on arXiv.org

Self-supervised pre-training can effectively improve the performance of
low-resource automatic speech recognition (ASR). However, existing
self-supervised pre-training methods are task-agnostic, i.e., they can be
applied to various downstream tasks. Although this enlarges the scope of
application, the capacity of the pre-trained model is not fully utilized for
the ASR task, and the learned representations may not be optimal for ASR. In
this work, in order to build a better pre-trained model for low-resource ASR,
we propose a pre-training approach called wav2vec-S, …

arxiv asr pre-training semi-supervised training
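For context, here is a minimal sketch of the downstream setting the abstract refers to: a self-supervised pre-trained wav2vec 2.0 encoder with a CTC head used for ASR. It uses the Hugging Face transformers API as an assumption, and the checkpoint name and dummy waveform are illustrative; it shows generic task-agnostic pre-training followed by ASR decoding, not the wav2vec-S pre-training method proposed in the paper.

```python
# Sketch only: decoding with a self-supervised pre-trained wav2vec 2.0 model
# plus a CTC head, via Hugging Face transformers (assumed API and checkpoint).
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
model.eval()

# Placeholder waveform: one second of silence at 16 kHz; substitute real audio.
waveform = np.zeros(16000, dtype=np.float32)

inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits   # shape: (batch, frames, vocab)

predicted_ids = torch.argmax(logits, dim=-1)     # greedy CTC decoding
transcription = processor.batch_decode(predicted_ids)
print(transcription)
```

In the low-resource scenario the paper targets, the same pre-trained encoder would instead be fine-tuned on a small labeled set before decoding.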
