Training Autoregressive Speech Recognition Models with Limited in-domain Supervision. (arXiv:2210.15135v1 [cs.CL])
Oct. 28, 2022, 1:16 a.m. | Chak-Fai Li, Francis Keith, William Hartmann, Matthew Snover
cs.CL updates on arXiv.org
Advances in self-supervised learning have significantly reduced the amount of
transcribed audio required for training. However, the majority of work in this
area is focused on read speech. We explore limited supervision in the domain of
conversational speech. While we assume the amount of in-domain data is limited,
we augment the model with open source read speech data. The XLS-R model has
been shown to perform well with limited adaptation data and serves as a strong
baseline. We use untranscribed …