March 12, 2024, 4:48 a.m. | Chao Zhang, Mohan Li, Ignas Budvytis, Stephan Liwicki

cs.CV updates on arXiv.org

arXiv:2403.06846v1 Announce Type: new
Abstract: Multimodal learning has advanced performance on many vision-language tasks. However, most existing work in embodied dialog research focuses on navigation and leaves the localization task understudied. The few existing dialog-based localization approaches assume the entire dialog is available prior to localization, which is impractical for deployed dialog-based localization. In this paper, we propose DiaLoc, a new dialog-based localization framework that aligns with real human-operator behavior. Specifically, we produce an iterative refinement of …
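The abstract is truncated, but the iterative-refinement idea it describes can be illustrated with a minimal sketch: maintain a belief over candidate locations and refine it each time a new dialog utterance arrives, rather than waiting for the whole dialog. Everything below is an assumption for illustration — the candidate locations, the `update_belief` helper, and the hard-coded utterance likelihoods (which in DiaLoc would come from a learned multimodal model) are all hypothetical, not the paper's method.

```python
def update_belief(belief, likelihoods):
    """One iterative refinement step: weight the current belief over
    candidate locations by the likelihood of the newest utterance at
    each location, then renormalize so the belief sums to 1."""
    posterior = {loc: belief[loc] * likelihoods.get(loc, 1e-9) for loc in belief}
    total = sum(posterior.values())
    return {loc: p / total for loc, p in posterior.items()}

# Three hypothetical candidate locations on a floor plan, uniform prior.
belief = {"kitchen": 1 / 3, "lobby": 1 / 3, "lab": 1 / 3}

# Hypothetical per-utterance likelihoods; in a real system these would be
# produced by a vision-language scorer, not hard-coded.
dialog = [
    {"kitchen": 0.7, "lobby": 0.2, "lab": 0.1},  # e.g. "I see a fridge"
    {"kitchen": 0.8, "lobby": 0.1, "lab": 0.1},  # e.g. "There is a sink nearby"
]

# Refine the belief after each utterance instead of after the full dialog.
for likelihoods in dialog:
    belief = update_belief(belief, likelihoods)

best = max(belief, key=belief.get)
```

The point of the sketch is the online update loop: each utterance immediately sharpens the location estimate, mirroring the deployed setting the abstract contrasts with whole-dialog approaches.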

