A virtual reality-based method for examining audiovisual prosody perception. (arXiv:2209.05745v1 [cs.CL])
Sept. 14, 2022, 1:15 a.m. | Hartmut Meister, Isa Samira Winter, Moritz Waechtler, Pascale Sandmann, Khaled Abdellatif
cs.CL updates on arXiv.org
Prosody plays a vital role in verbal communication, and its acoustic cues have been examined extensively. However, prosodic characteristics are perceived not only auditorily but also visually, based on head and facial movements. The purpose of this report is to present a method for examining audiovisual prosody using virtual reality. We show that animations based on a virtual human provide motion cues similar to those obtained from video recordings of a real talker. The use of virtual reality opens up …