April 17, 2024, 4:42 a.m. | Peiyuan Zhi, Zhiyuan Zhang, Muzhi Han, Zeyu Zhang, Zhitian Li, Ziyuan Jiao, Baoxiong Jia, Siyuan Huang

cs.LG updates on arXiv.org

arXiv:2404.10220v1 Announce Type: cross
Abstract: Autonomous robot navigation and manipulation in open environments require reasoning and replanning with closed-loop feedback. We present COME-robot, the first closed-loop framework utilizing the GPT-4V vision-language foundation model for open-ended reasoning and adaptive planning in real-world scenarios. We meticulously construct a library of action primitives for robot exploration, navigation, and manipulation, serving as callable execution modules for GPT-4V in task planning. On top of these modules, GPT-4V serves as the brain that can accomplish multimodal …
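The core pattern the abstract describes, action primitives wrapped as callable execution modules and a vision-language model that replans from execution feedback, can be sketched in a few lines. The sketch below is not the authors' code: every name in it (Feedback, explore, navigate_to, pick, query_planner, closed_loop) is hypothetical, and the GPT-4V query is replaced with a hard-coded stand-in so the example runs offline.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple


@dataclass
class Feedback:
    """Execution result from a primitive: a success flag plus a
    natural-language observation the planner can reason over."""
    success: bool
    observation: str


_pick_attempts = 0  # demo-only state: the first grasp fails, the retry succeeds


def explore(region: str) -> Feedback:
    # Hypothetical exploration primitive.
    return Feedback(True, f"Explored the {region}; a mug is visible on the table.")


def navigate_to(target: str) -> Feedback:
    # Hypothetical navigation primitive.
    return Feedback(True, f"Arrived at the {target}.")


def pick(obj: str) -> Feedback:
    # Hypothetical grasping primitive; fails once so the closed loop
    # has a failure to recover from.
    global _pick_attempts
    _pick_attempts += 1
    if _pick_attempts == 1:
        return Feedback(False, f"Grasp on the {obj} slipped; it is still on the table.")
    return Feedback(True, f"Picked up the {obj}.")


# The primitive library exposed to the planner as callable execution modules.
PRIMITIVES: Dict[str, Callable[[str], Feedback]] = {
    "explore": explore,
    "navigate_to": navigate_to,
    "pick": pick,
}


def query_planner(task: str, history: List[str]) -> Tuple[str, str]:
    """Stand-in for a GPT-4V call. A real system would send the current
    camera image plus the task and the feedback history as a multimodal
    prompt and parse the chosen primitive from the response; here the
    policy is hard-coded so the example runs offline."""
    if not history:
        return "explore", "kitchen"
    if not any("Arrived" in h for h in history):
        return "navigate_to", "table"
    return "pick", "mug"  # covers retrying after a reported grasp failure


def closed_loop(task: str, max_steps: int = 10) -> None:
    """Plan-act-observe loop: each primitive's feedback is appended to the
    history, so the next planner query can adapt, e.g. retry a failed grasp."""
    history: List[str] = []
    for _ in range(max_steps):
        name, arg = query_planner(task, history)
        fb = PRIMITIVES[name](arg)
        history.append(f"{name}({arg}) -> {fb.observation}")
        print(history[-1])
        if fb.success and name == "pick":
            return  # demo termination: the target object is in hand


closed_loop("bring me the mug")
```

The design point worth noting is that the planner only ever sees primitive names and feedback strings, so the skill library can grow by registering another callable, without changing the loop itself.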

