SoundingActions: Learning How Actions Sound from Narrated Egocentric Videos
April 9, 2024, 4:47 a.m. | Changan Chen, Kumar Ashutosh, Rohit Girdhar, David Harwath, Kristen Grauman
cs.CV updates on arXiv.org
Abstract: We propose a novel self-supervised embedding to learn how actions sound from narrated in-the-wild egocentric videos. Whereas existing methods rely on curated data with known audio-visual correspondence, our multimodal contrastive-consensus coding (MC3) embedding reinforces the associations between audio, language, and vision when all modality pairs agree, while diminishing those associations when any one pair does not. We show our approach can successfully discover how the long tail of human actions sound from egocentric video, outperforming …
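The abstract describes the core mechanism of MC3: strengthen cross-modal associations only when all three modality pairs (audio-text, audio-vision, text-vision) agree, and weaken them otherwise. Below is a minimal NumPy sketch of that general idea, not the authors' implementation: the consensus formula, the per-pair InfoNCE-style terms, the function names, and the temperature `tau` are all illustrative assumptions.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit sphere."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def logsumexp(x, axis=1):
    """Numerically stable log-sum-exp along an axis."""
    m = x.max(axis=axis, keepdims=True)
    return m + np.log(np.exp(x - m).sum(axis=axis, keepdims=True))

def consensus_weighted_loss(audio, text, vision, tau=0.1):
    """Toy consensus-weighted contrastive objective over three modalities.

    For each sample, the positive term of every pairwise contrastive loss
    is scaled by the agreement of the *weakest* modality pair, so an
    association is reinforced only when all pairs agree (a simplified
    reading of the consensus idea; the real MC3 objective may differ).
    """
    a, t, v = (l2_normalize(m) for m in (audio, text, vision))
    # Per-sample cosine similarity of each matched modality pair.
    pair_sims = np.stack([
        np.sum(a * t, axis=1),   # audio-text
        np.sum(a * v, axis=1),   # audio-vision
        np.sum(t * v, axis=1),   # text-vision
    ])
    # Consensus weight in [0, 1]: lowest pairwise agreement, rescaled.
    consensus = (pair_sims.min(axis=0) + 1.0) / 2.0

    loss = 0.0
    for x, y in ((a, t), (a, v), (t, v)):
        logits = x @ y.T / tau                               # (B, B)
        log_probs = logits - logsumexp(logits, axis=1)       # row-wise softmax
        # Diagonal entries are the matched (positive) pairs.
        loss += -np.mean(consensus * np.diag(log_probs))
    return loss / 3.0

# Example: random batch of 8 samples with 16-dim embeddings per modality.
rng = np.random.default_rng(0)
a, t, v = (rng.normal(size=(8, 16)) for _ in range(3))
loss = consensus_weighted_loss(a, t, v)
```

Since the diagonal log-probabilities are non-positive and the consensus weight lies in [0, 1], the loss is always a finite non-negative scalar; batches whose modality pairs disagree contribute less gradient, which is the behavior the abstract attributes to MC3.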