Aug. 16, 2022, 1:12 a.m. | Andrea Burns, Deniz Arsan, Sanjna Agrawal, Ranjitha Kumar, Kate Saenko, Bryan A. Plummer

cs.CL updates on arXiv.org

Vision-language navigation (VLN), in which an agent follows language
instructions in a visual environment, has been studied under the premise that
the input command is fully feasible in the environment. Yet in practice, a
request may not be possible due to language ambiguity or environment changes.
To study VLN with unknown command feasibility, we introduce a new dataset,
Mobile app Tasks with Iterative Feedback (MoTIF), where the goal is to
complete a natural language command in a mobile app. Mobile …
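To make the "unknown command feasibility" setup concrete, here is a minimal sketch of the feasibility-prediction subtask: given a natural language command and the text visible on the current app screen, decide whether the command can be completed. All names here (FeasibilityExample, predict_feasible, the token-overlap heuristic) are hypothetical illustrations for this post, not the paper's actual model or the MoTIF data format.

```python
# Toy sketch of feasibility prediction in a MoTIF-style setting.
# Assumption: app state is summarized as the text extracted from the
# current screen; the real task uses richer signals than this.
from dataclasses import dataclass


@dataclass
class FeasibilityExample:
    command: str          # natural language command, e.g. "open settings"
    app_screen_text: str  # text visible on the app's current screen
    feasible: bool        # gold label: can the command be completed here?


def token_overlap(command: str, screen_text: str) -> float:
    """Fraction of command tokens that also appear on screen (toy feature)."""
    cmd = set(command.lower().split())
    scr = set(screen_text.lower().split())
    return len(cmd & scr) / max(len(cmd), 1)


def predict_feasible(example: FeasibilityExample, threshold: float = 0.5) -> bool:
    """Toy baseline: call a command feasible if enough of its words
    are visible in the current app screen."""
    return token_overlap(example.command, example.app_screen_text) >= threshold


if __name__ == "__main__":
    ex = FeasibilityExample(
        command="turn on dark mode",
        app_screen_text="Settings Display Dark mode Notifications",
        feasible=True,
    )
    print(predict_feasible(ex))  # True: most command tokens appear on screen
```

A lexical-overlap baseline like this would fail on ambiguous or paraphrased commands, which is exactly the gap a dataset with labeled infeasible requests is meant to expose.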

arxiv dataset interactive language navigation vision
