March 26, 2024, 4:42 a.m. | Abhi Kamboj, Minh Do

cs.LG updates on arXiv.org

arXiv:2403.15444v1 Announce Type: cross
Abstract: Despite living in a multi-sensory world, most AI models are limited to textual and visual understanding of human motion and behavior. In fact, full situational awareness of human motion could best be achieved through a combination of sensors. In this survey, we investigate how knowledge can be transferred and utilized among modalities for Human Activity/Action Recognition (HAR), i.e., cross-modality transfer learning. We motivate the importance and potential of IMU data and its applicability in cross-modality …
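The abstract's notion of transferring knowledge between modalities is often realized by aligning per-modality encoders in a shared embedding space. The sketch below is a generic, illustrative example of one such approach (contrastive alignment of paired IMU and video features with an InfoNCE-style loss) and is not taken from the surveyed paper; all dimensions, projections, and names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy batch of paired windows: video features (e.g., from a visual
# encoder) and IMU features (e.g., from a 1-D conv encoder over
# accelerometer/gyroscope streams). Dimensions are arbitrary.
batch, d_video, d_imu, d_shared = 8, 512, 64, 128

video_feats = rng.normal(size=(batch, d_video))
imu_feats = rng.normal(size=(batch, d_imu))

# Linear projections into a shared embedding space. They are random
# here; in practice they would be learned by minimizing the loss below.
W_video = rng.normal(size=(d_video, d_shared)) / np.sqrt(d_video)
W_imu = rng.normal(size=(d_imu, d_shared)) / np.sqrt(d_imu)

def l2_normalize(x, axis=-1):
    """Scale each row to unit length so dot products become cosines."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

z_video = l2_normalize(video_feats @ W_video)
z_imu = l2_normalize(imu_feats @ W_imu)

# InfoNCE-style objective: each IMU window should match its own video
# window (the diagonal of the similarity matrix) against the rest of
# the batch, pulling corresponding cross-modal pairs together.
temperature = 0.07
logits = (z_imu @ z_video.T) / temperature
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
loss = -np.mean(np.diag(log_probs))
print(f"contrastive alignment loss: {loss:.3f}")
```

Once trained this way, the IMU encoder inherits structure from the visual modality, which is one route to the cross-modality transfer the survey examines.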

