DAE-Talker: High Fidelity Speech-Driven Talking Face Generation with Diffusion Autoencoder
March 4, 2024, 5:45 a.m. | Chenpeng Du, Qi Chen, Xie Chen, Kai Yu
cs.CV updates on arXiv.org arxiv.org
Abstract: While recent research has made significant progress in speech-driven talking face generation, the quality of the generated video still lags behind that of real recordings. One reason for this is the use of handcrafted intermediate representations like facial landmarks and 3DMM coefficients, which are designed based on human knowledge and are insufficient to precisely describe facial movements. Additionally, these methods require an external pretrained model for extracting these representations, whose performance sets an upper bound …
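The abstract's core argument is that learned, data-driven latents can replace handcrafted intermediates such as facial landmarks or 3DMM coefficients. As a rough intuition for the diffusion-autoencoder setup (this is a toy numpy sketch, not the paper's model: the `encode` function and the 4x4 "frame" are hypothetical stand-ins), a semantic encoder compresses each frame into a latent while the usual DDPM forward process adds noise; a denoiser conditioned on that latent would then recover the frame:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x):
    # Toy "semantic encoder": pools a flattened frame down to a 4-dim
    # latent learned from data, instead of landmarks or 3DMM coefficients.
    return x.reshape(4, -1).mean(axis=1)

def add_noise(x0, alpha_bar, eps):
    # Standard DDPM forward process:
    # x_t = sqrt(alpha_bar) * x0 + sqrt(1 - alpha_bar) * eps
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

x0 = rng.standard_normal(16)       # a flattened 4x4 "frame"
z = encode(x0)                     # data-driven latent; shape (4,)
eps = rng.standard_normal(16)      # Gaussian noise
x_t = add_noise(x0, alpha_bar=0.5, eps=eps)

# If a latent-conditioned denoiser predicted the noise perfectly
# (eps_hat == eps), the clean frame is recovered exactly:
x0_hat = (x_t - np.sqrt(1.0 - 0.5) * eps) / np.sqrt(0.5)
print(np.allclose(x0_hat, x0))  # True
```

The point of the sketch is only the conditioning structure: the latent `z` carries whatever facial information the encoder learns to keep, with no human-designed representation setting an upper bound on it.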