July 22, 2022, 1:10 a.m. | Yu Zhang, James Qin, Daniel S. Park, Wei Han, Chung-Cheng Chiu, Ruoming Pang, Quoc V. Le, Yonghui Wu

cs.LG updates on arXiv.org arxiv.org

We employ a combination of recent developments in semi-supervised learning
for automatic speech recognition to obtain state-of-the-art results on
LibriSpeech utilizing the unlabeled audio of the Libri-Light dataset. More
precisely, we carry out noisy student training with SpecAugment, using giant
Conformer models pre-trained with wav2vec 2.0. By doing so, we achieve
word-error-rates (WERs) of 1.4%/2.6% on the LibriSpeech test/test-other sets,
compared with the previous state-of-the-art WERs of 1.7%/3.3%.
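The SpecAugment step mentioned in the abstract amounts to masking random bands of a log-mel spectrogram along the frequency and time axes before training. The sketch below illustrates that idea; the function name, mask counts, and widths are illustrative defaults, not the paper's exact configuration.

```python
import numpy as np

def spec_augment(spec, num_freq_masks=2, freq_mask_width=27,
                 num_time_masks=2, time_mask_width=40, rng=None):
    """SpecAugment-style masking on a (time, freq) log-mel spectrogram.

    Mask counts/widths here are illustrative, not the paper's settings.
    Each mask zeroes a randomly placed band of random width.
    """
    rng = rng or np.random.default_rng()
    out = spec.copy()
    num_t, num_f = out.shape
    for _ in range(num_freq_masks):
        w = int(rng.integers(0, freq_mask_width + 1))
        f0 = int(rng.integers(0, max(1, num_f - w + 1)))
        out[:, f0:f0 + w] = 0.0  # zero a frequency band
    for _ in range(num_time_masks):
        w = int(rng.integers(0, time_mask_width + 1))
        t0 = int(rng.integers(0, max(1, num_t - w + 1)))
        out[t0:t0 + w, :] = 0.0  # zero a span of time frames
    return out

# Usage: augment a fake 100-frame, 80-bin spectrogram.
spec = np.random.randn(100, 80)
aug = spec_augment(spec)
```

In noisy student training, this masking is applied to the student model's inputs on pseudo-labeled data, forcing it to be robust to the corruption while the teacher's labels stay clean.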

