Learning Audio-Visual Speech Representation by Masked Multimodal Cluster Prediction. (arXiv:2201.02184v1 [eess.AS])
Jan. 7, 2022, 2:10 a.m. | Bowen Shi, Wei-Ning Hsu, Kushal Lakhotia, Abdelrahman Mohamed
cs.CV updates on arXiv.org
Video recordings of speech contain correlated audio and visual information,
providing a strong signal for speech representation learning from the speaker's
lip movements and the produced sound. We introduce Audio-Visual Hidden Unit
BERT (AV-HuBERT), a self-supervised representation learning framework for
audio-visual speech, which masks multi-stream video input and predicts
automatically discovered and iteratively refined multimodal hidden units.
AV-HuBERT learns powerful audio-visual speech representations that benefit both
lip-reading and automatic speech recognition. On LRS3 (433 hours), the largest
public lip-reading benchmark, AV-HuBERT …
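The objective the abstract describes — masking frames of the fused audio-visual input and predicting discrete cluster targets at the masked positions — can be sketched roughly as below. This is a toy illustration, not the paper's method: the feature dimensions, mask ratio, nearest-centroid "clustering," and the linear stand-in for the transformer encoder are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for per-frame audio and lip-video features (T frames each).
T, D_AUDIO, D_VIDEO, N_CLUSTERS = 50, 26, 32, 8
audio = rng.normal(size=(T, D_AUDIO))
video = rng.normal(size=(T, D_VIDEO))

# Step 1: derive discrete hidden-unit targets by clustering the fused
# features (here: nearest of N_CLUSTERS random centroids; the paper
# iteratively refines such targets, which this sketch does not do).
fused = np.concatenate([audio, video], axis=1)        # (T, D_AUDIO + D_VIDEO)
centroids = rng.normal(size=(N_CLUSTERS, fused.shape[1]))
dists = ((fused[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
targets = dists.argmin(axis=1)                        # (T,) cluster IDs

# Step 2: mask a random subset of frames (40% is an illustrative ratio).
mask_idx = rng.choice(T, size=int(0.4 * T), replace=False)
masked = fused.copy()
masked[mask_idx] = 0.0                                # zero out masked frames

# Step 3: a linear layer stands in for the transformer encoder; it maps
# each (possibly masked) frame to logits over the cluster vocabulary.
W = rng.normal(size=(fused.shape[1], N_CLUSTERS)) * 0.1
logits = masked @ W                                   # (T, N_CLUSTERS)

# Step 4: cross-entropy on the masked positions only — the model must
# recover the cluster identity of frames it never saw.
exp = np.exp(logits - logits.max(axis=1, keepdims=True))
probs = exp / exp.sum(axis=1, keepdims=True)
loss = -np.log(probs[mask_idx, targets[mask_idx]]).mean()
print(f"masked frames: {len(mask_idx)}, masked-prediction loss: {loss:.3f}")
```

In the actual framework this loss would be backpropagated through a transformer, and the cluster targets would be re-estimated from the improved representations across training iterations.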