March 4, 2024, 5:45 a.m. | Chenpeng Du, Qi Chen, Xie Chen, Kai Yu

arXiv:2303.17550v5 Announce Type: replace
Abstract: While recent research has made significant progress in speech-driven talking face generation, the quality of the generated video still lags behind that of real recordings. One reason is the use of handcrafted intermediate representations, such as facial landmarks and 3DMM coefficients, which are designed from human knowledge and are insufficient to precisely describe facial movements. Additionally, these methods require an external pretrained model to extract the representations, and that model's performance sets an upper bound …