Empowering Embodied Visual Tracking with Visual Foundation Models and Offline RL
April 16, 2024, 4:48 a.m. | Fangwei Zhong, Kui Wu, Hai Ci, Churan Wang, Hao Chen
cs.CV updates on arXiv.org arxiv.org
Abstract: Embodied visual tracking is the task of following a target object through dynamic 3D environments using an agent's egocentric vision. It is a vital yet challenging skill for embodied agents, and existing methods suffer from inefficient training and poor generalization. In this paper, we propose a novel framework that combines visual foundation models (VFMs) and offline reinforcement learning (offline RL) to empower embodied visual tracking. We use a pre-trained VFM, such as "Tracking Anything", to extract semantic …
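The pipeline the abstract describes can be sketched in miniature: a foundation model supplies a target mask for each egocentric frame, and a policy is then fit purely from logged data rather than online interaction. The sketch below is an illustrative assumption, not the authors' implementation; `mask_to_state` and the toy behavior-cloning objective stand in for the paper's VFM features and offline RL algorithm.

```python
# Hypothetical sketch of the VFM + offline RL pipeline. A visual foundation
# model (e.g. "Tracking Anything") would produce the per-frame target mask;
# here masks are hand-built, and "offline RL" is reduced to its simplest
# relative, behavior cloning from logged (mask, action) pairs.

from collections import Counter, defaultdict

def mask_to_state(mask):
    """Reduce a binary target mask (list of rows) to a coarse state:
    which horizontal third of the image the target centroid falls in."""
    cols = [c for row in mask for c, v in enumerate(row) if v]
    if not cols:
        return "lost"
    width = len(mask[0])
    cx = sum(cols) / len(cols)
    return ["left", "center", "right"][min(2, int(3 * cx / width))]

def behavior_clone(dataset):
    """Offline policy extraction: for each coarse state, pick the action
    most frequent in the logged demonstrations."""
    counts = defaultdict(Counter)
    for mask, action in dataset:
        counts[mask_to_state(mask)][action] += 1
    return {s: c.most_common(1)[0][0] for s, c in counts.items()}

# Toy logged data: 4x6 masks with the target in different image thirds.
left_mask = [[1 if c < 2 else 0 for c in range(6)] for _ in range(4)]
right_mask = [[1 if c > 3 else 0 for c in range(6)] for _ in range(4)]
dataset = [(left_mask, "turn_left"), (right_mask, "turn_right"),
           (left_mask, "turn_left")]

policy = behavior_clone(dataset)
print(policy["left"], policy["right"])  # turn_left turn_right
```

A real instantiation would replace the centroid heuristic with VFM embeddings or segmentation features and the counting rule with a conservative offline RL objective, but the data flow (frames → foundation-model masks → offline policy learning) is the same.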