Web: http://arxiv.org/abs/2205.02058

May 5, 2022, 1:12 a.m. | Rodrigo Mira, Alexandros Haliassos, Stavros Petridis, Björn W. Schuller, Maja Pantic

cs.LG updates on arXiv.org

Video-to-speech synthesis (also known as lip-to-speech) refers to the
translation of silent lip movements into the corresponding audio. This task has
received increasing attention due to its self-supervised nature (i.e., it can be
trained without manual labelling) combined with the ever-growing
collection of audio-visual data available online. Despite these strong
motivations, contemporary video-to-speech works focus mainly on small- to
medium-sized corpora with substantial constraints in both vocabulary and
setting. In this work, we introduce a scalable video-to-speech framework …
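
The abstract is truncated here, but the task it defines (mapping a sequence of silent lip-movement frames to audio) can be illustrated with a minimal, hypothetical PyTorch sketch. This is not the authors' architecture; the layer sizes, the 4-to-1 mel-frames-per-video-frame ratio, and the use of a mel-spectrogram target followed by a separate vocoder are all illustrative assumptions.

```python
# Hypothetical video-to-speech sketch (not the paper's model): silent
# mouth-crop frames in, mel-spectrogram out, for a separate vocoder.
import torch
import torch.nn as nn

class VideoToSpeech(nn.Module):
    def __init__(self, n_mels: int = 80, hidden: int = 256):
        super().__init__()
        # Per-frame visual encoder: grayscale mouth crops -> feature vectors.
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, hidden),
        )
        # Temporal model over the frame sequence.
        self.temporal = nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)
        # Audio has a higher frame rate than video, so each video frame is
        # decoded into several mel frames (assumed ratio: 4).
        self.mel_per_frame = 4
        self.n_mels = n_mels
        self.decoder = nn.Linear(2 * hidden, self.mel_per_frame * n_mels)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, 1, height, width) silent mouth-region crops
        b, t, c, h, w = frames.shape
        feats = self.frame_encoder(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
        feats, _ = self.temporal(feats)
        mel = self.decoder(feats).reshape(b, t * self.mel_per_frame, self.n_mels)
        return mel  # (batch, audio_frames, n_mels), to be vocoded into a waveform

# Example: a 1-second clip at 25 fps of 96x96 mouth crops.
model = VideoToSpeech()
video = torch.randn(2, 25, 1, 96, 96)
print(model(video).shape)  # torch.Size([2, 100, 80])
```

The sketch only fixes the interface the abstract implies: a frame-rate-limited visual stream is upsampled in time and regressed onto an audio representation; how the actual framework scales this to large, unconstrained corpora is described in the paper itself.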
