FaceTalk: Audio-Driven Motion Diffusion for Neural Parametric Head Models
March 19, 2024, 4:51 a.m. | Shivangi Aneja, Justus Thies, Angela Dai, Matthias Nießner
cs.CV updates on arXiv.org
Abstract: We introduce FaceTalk, a novel generative approach for synthesizing high-fidelity 3D motion sequences of talking human heads from an input audio signal. To capture the expressive, detailed nature of human heads, including hair, ears, and finer-scale eye movements, we propose to couple the speech signal with the latent space of neural parametric head models to create high-fidelity, temporally coherent motion sequences. We propose a new latent diffusion model for this task, operating in the expression space …
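The abstract describes a latent diffusion model that denoises expression-space latents conditioned on audio. As a rough illustration of that idea (not the FaceTalk implementation — the network, schedule, dimensions, and audio features below are all hypothetical stand-ins), a standard DDPM-style ancestral sampling loop with an audio-conditioned noise predictor can be sketched as:

```python
import numpy as np

# Hypothetical sketch: audio-conditioned latent diffusion sampling of an
# expression-latent sequence. Illustrative only; not the FaceTalk model.

rng = np.random.default_rng(0)

T = 50             # number of diffusion steps (assumed)
latent_dim = 8     # toy expression-latent size (assumed)
n_frames = 4       # frames in the motion sequence (assumed)

# Standard linear beta schedule, as in DDPM
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def denoiser(x_t, t, audio_feat):
    """Stand-in for a learned noise predictor conditioned on audio features.
    Here just a fixed linear map of the noisy latent plus the audio feature."""
    return 0.1 * x_t + 0.05 * audio_feat

def sample(audio_feat):
    """Ancestral DDPM sampling: start from Gaussian noise and iteratively
    denoise, conditioning each step on the per-frame audio features."""
    x = rng.standard_normal((n_frames, latent_dim))
    for t in reversed(range(T)):
        eps = denoiser(x, t, audio_feat)
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / np.sqrt(alphas[t])
        if t > 0:  # add noise on all but the final step
            x = x + np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x

audio_feat = rng.standard_normal((n_frames, latent_dim))  # toy audio features
latents = sample(audio_feat)
print(latents.shape)  # one latent vector per frame
```

In the paper's setting, the sampled latents would then be decoded by the neural parametric head model into a temporally coherent head-motion sequence; here they are just arrays.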