IVLMap: Instance-Aware Visual Language Grounding for Consumer Robot Navigation
March 29, 2024, 4:45 a.m. | Jiacui Huang, Hongtao Zhang, Mingbo Zhao, Zhou Wu
cs.CV updates on arXiv.org arxiv.org
Abstract: Vision-and-Language Navigation (VLN) is a challenging task that requires a robot to navigate photo-realistic environments following natural-language instructions from humans. Recent studies approach this task by constructing a semantic spatial map representation of the environment and then leveraging the strong reasoning ability of large language models to generate code that guides the robot's navigation. However, these methods face limitations in instance-level and attribute-level navigation tasks, as they cannot distinguish different instances …
arxiv consumer cs.ai cs.cv instance language navigation robot robot navigation type visual
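The instance-level distinction the abstract highlights can be illustrated with a minimal sketch. This is a hypothetical toy example, not the paper's IVLMap implementation: the `InstanceMap` class and its methods are invented names. A purely semantic map stores only categories, while an instance-aware map keys each object by (category, instance index), so a prompt like "go to the second chair" can resolve to a specific coordinate.

```python
# Hypothetical sketch of an instance-aware semantic map lookup.
# InstanceMap and its methods are illustrative, not the IVLMap API.
from dataclasses import dataclass, field


@dataclass
class InstanceMap:
    # (category, instance index) -> (x, y) map coordinate
    entries: dict = field(default_factory=dict)

    def add(self, category: str, position: tuple) -> None:
        # Assign the next free index within this category.
        idx = sum(1 for (c, _) in self.entries if c == category)
        self.entries[(category, idx)] = position

    def query(self, category: str, index: int = 0):
        # Instance-level lookup: "the second chair" -> ("chair", 1).
        # A category-only semantic map could not disambiguate here.
        return self.entries.get((category, index))


m = InstanceMap()
m.add("chair", (1.0, 2.0))
m.add("chair", (4.0, 0.5))
print(m.query("chair", 1))  # coordinate of the second chair instance
```

In a real system the indices would come from instance segmentation projected into the map frame, and attribute-level queries ("the red chair") would need per-instance attribute records as well.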