LEMON: Learning 3D Human-Object Interaction Relation from 2D Images
April 2, 2024, 7:49 p.m. | Yuhang Yang, Wei Zhai, Hongchen Luo, Yang Cao, Zheng-Jun Zha
cs.CV updates on arXiv.org arxiv.org
Abstract: Learning 3D human-object interaction relation is pivotal to embodied AI and interaction modeling. Most existing methods approach the goal by learning to predict isolated interaction elements, e.g., human contact, object affordance, and human-object spatial relation, primarily from the perspective of either the human or the object. These approaches underexploit certain correlations between the interaction counterparts (human and object) and struggle to address the uncertainty in interactions. In fact, objects' functionalities potentially affect humans' interaction intentions, which reveals …