Nov. 7, 2022, 2:14 a.m. | Yufeng Zheng, Victoria Fernández Abrevaya, Marcel C. Bühler, Xu Chen, Michael J. Black, Otmar Hilliges

cs.CV updates on arXiv.org

Traditional 3D morphable face models (3DMMs) provide fine-grained control
over expression but cannot easily capture geometric and appearance details.
Neural volumetric representations approach photorealism but are hard to animate
and do not generalize well to unseen expressions. To tackle this problem, we
propose IMavatar (Implicit Morphable avatar), a novel method for learning
implicit head avatars from monocular videos. Inspired by the fine-grained
control mechanisms afforded by conventional 3DMMs, we represent the
expression- and pose-related deformations via learned blendshapes and …
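For context on the blendshape mechanism the abstract references: in a conventional 3DMM, a face mesh is a template deformed by a linear combination of per-vertex expression offsets. The sketch below is a hypothetical illustration of that classical linear model (not the paper's code; all names are placeholders) — IMavatar learns analogous blendshapes implicitly rather than as fixed mesh offsets.

```python
import numpy as np

def blendshape_mesh(template, blendshapes, weights):
    """Classical linear blendshape model (illustrative only).

    template:    (V, 3) rest-pose vertex positions
    blendshapes: (K, V, 3) per-vertex offsets, one per expression basis
    weights:     (K,) expression coefficients
    Returns the deformed (V, 3) mesh: template + sum_k weights[k] * blendshapes[k].
    """
    return template + np.einsum("k,kvc->vc", weights, blendshapes)

# Toy example: 2 vertices, 1 blendshape that lifts vertex 0 by one unit in z.
template = np.zeros((2, 3))
blendshapes = np.array([[[0.0, 0.0, 1.0],
                         [0.0, 0.0, 0.0]]])
mesh = blendshape_mesh(template, blendshapes, np.array([0.5]))
# Vertex 0 moves to z = 0.5; vertex 1 stays at the origin.
```

The fine-grained control the abstract credits to 3DMMs comes from these interpretable coefficients: varying one weight sweeps one semantic expression direction.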

