Improving the Robustness of DistilHuBERT to Unseen Noisy Conditions via Data Augmentation, Curriculum Learning, and Multi-Task Enhancement. (arXiv:2211.06562v1 [cs.SD])
Nov. 15, 2022, 2:16 a.m. | Heitor R. Guimarães, Arthur Pimentel, Anderson R. Avila, Mehdi Rezagholizadeh, Tiago H. Falk
cs.CL updates on arXiv.org arxiv.org
Self-supervised speech representation learning aims to extract meaningful factors from the speech signal that can later be used across different downstream tasks, such as speech and/or emotion recognition. Existing models, such as HuBERT, however, can be fairly large, and thus may not be suitable for edge speech applications. Moreover, realistic applications typically involve speech corrupted by noise and room reverberation, hence models need to provide representations that are robust to such environmental factors. In this study, we build on the so-called …
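The robustness techniques named in the title can be illustrated with a generic sketch: mix noise into clean speech at a target signal-to-noise ratio, and schedule that SNR from easy to hard across training epochs (a simple curriculum). This is not the paper's exact pipeline; the function names, the SNR range, and the linear schedule below are illustrative assumptions.

```python
import numpy as np

def augment_with_noise(speech, noise, snr_db, rng=None):
    """Mix a noise segment into clean speech at a target SNR in dB.

    Generic additive-noise augmentation sketch (assumed, not the
    paper's implementation). Inputs are 1-D float waveforms.
    """
    rng = rng or np.random.default_rng()
    # Tile the noise if it is shorter than the speech, then random-crop it.
    if len(noise) < len(speech):
        reps = int(np.ceil(len(speech) / len(noise)))
        noise = np.tile(noise, reps)
    start = rng.integers(0, len(noise) - len(speech) + 1)
    noise = noise[start:start + len(speech)]

    # Scale the noise so speech power / noise power matches the target SNR.
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise

def snr_for_epoch(epoch, total_epochs, snr_max=20.0, snr_min=0.0):
    """Curriculum-style schedule: start at easy (high) SNRs, end at hard ones.

    The linear ramp and the 20 dB -> 0 dB range are assumptions for
    illustration only.
    """
    frac = epoch / max(total_epochs - 1, 1)
    return snr_max - frac * (snr_max - snr_min)
```

In a training loop, `snr_for_epoch(epoch, num_epochs)` would pick the current difficulty and `augment_with_noise` would corrupt each batch on the fly, so the student model sees progressively harder acoustic conditions.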