Pose2Room: Understanding 3D Scenes from Human Activities. (arXiv:2112.03030v2 [cs.RO] UPDATED)
July 15, 2022, 1:13 a.m. | Yinyu Nie, Angela Dai, Xiaoguang Han, Matthias Nießner
cs.CV updates on arXiv.org arxiv.org
With wearable IMU sensors, one can estimate human poses without requiring visual input (von Marcard et al., 2017). In this work, we pose the question: can we reason about object structure in real-world environments solely from human trajectory information? Crucially, we observe that human motion and interactions tend to give strong information about the objects in a scene -- for instance, a person sitting indicates the likely presence of a chair or sofa. To this end, we propose P2R-Net to learn a …
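The intuition behind the work -- that a pose trajectory alone can hint at which object a person is interacting with -- can be illustrated with a toy sketch. The function name, thresholds, and rule below are hypothetical; P2R-Net itself is a learned network, not this heuristic.

```python
import numpy as np

def infer_object_from_poses(pelvis_heights, standing_height=0.9, sit_ratio=0.6):
    """Toy heuristic (not the paper's method): guess whether a trajectory of
    pelvis heights (meters) suggests a sit-able object such as a chair or sofa."""
    pelvis_heights = np.asarray(pelvis_heights, dtype=float)
    # A sustained drop of the pelvis well below standing height hints at sitting.
    sitting = pelvis_heights < sit_ratio * standing_height
    if sitting.mean() > 0.3:  # seated for a meaningful fraction of the clip
        return "chair_or_sofa"
    return "unknown"

# A walking-then-sitting trajectory: pelvis drops from ~0.9 m to ~0.45 m.
traj = [0.9] * 10 + [0.45] * 10
print(infer_object_from_poses(traj))  # -> chair_or_sofa
```

The actual model replaces this hand-written rule with a network that maps observed pose sequences to probabilistic estimates of object class and placement.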