Aug. 17, 2022, 1:12 a.m. | Rodrigo Mira, Alexandros Haliassos, Stavros Petridis, Björn W. Schuller, Maja Pantic

cs.CV updates on arXiv.org

Video-to-speech synthesis (also known as lip-to-speech) refers to the
translation of silent lip movements into the corresponding audio. This task has
received an increasing amount of attention due to its self-supervised nature
(i.e., it can be trained without manual labelling) combined with the ever-growing
collection of audio-visual data available online. Despite these strong
motivations, contemporary video-to-speech works focus mainly on small- to
medium-sized corpora with substantial constraints in both vocabulary and
setting. In this work, we introduce a scalable video-to-speech framework …

arxiv scalable speech video
