Multimodal analysis of the predictability of hand-gesture properties. (arXiv:2108.05762v3 [cs.HC] UPDATED)
Jan. 17, 2022, 2:10 a.m. | Taras Kucherenko, Rajmund Nagy, Michael Neff, Hedvig Kjellström, Gustav Eje Henter
cs.LG updates on arXiv.org arxiv.org
Embodied conversational agents benefit from being able to accompany their
speech with gestures. Although many data-driven approaches to gesture
generation have been proposed in recent years, it is still unclear whether such
systems can consistently generate gestures that convey meaning. We investigate
which gesture properties (phase, category, and semantics) can be predicted from
speech text and/or audio using contemporary deep learning. In extensive
experiments, we show that gesture properties related to gesture meaning
(semantics and category) are predictable from text …
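As a rough illustration of the prediction task the abstract describes — mapping transcript text to a gesture-property label — here is a toy sketch using a bag-of-words nearest-centroid classifier. This is not the paper's method (the authors use contemporary deep learning models), and the training phrases, labels, and category names below are invented for illustration only.

```python
import math
from collections import Counter

# Invented toy data: transcript snippets paired with a hypothetical
# gesture-category label ("iconic" = depicting meaning, "beat" = rhythmic).
TRAIN = [
    ("the box is this big", "iconic"),
    ("it goes up and then down", "iconic"),
    ("well maybe I guess so", "beat"),
    ("and then and then okay", "beat"),
]

def bow(text):
    """Bag-of-words vector as a Counter over lowercase tokens."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse Counter vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def predict(text):
    """Predict the gesture-category label whose class centroid is closest."""
    grouped = {}
    for t, label in TRAIN:
        grouped.setdefault(label, Counter()).update(bow(t))
    return max(grouped, key=lambda lab: cosine(bow(text), grouped[lab]))
```

A size-describing utterance lands near the "iconic" centroid (`predict("it is this big")`), while filler-heavy speech lands near "beat" (`predict("and then maybe okay")`). The paper's finding that meaning-related properties are predictable from text suggests that far richer text features than this toy bag-of-words carry such signal.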