Composing Pre-Trained Object-Centric Representations for Robotics From "What" and "Where" Foundation Models
April 23, 2024, 4:43 a.m. | Junyao Shi, Jianing Qian, Yecheng Jason Ma, Dinesh Jayaraman
cs.LG updates on arXiv.org
Abstract: There have recently been large advances both in pre-training visual representations for robotic control and in segmenting unknown-category objects in general images. To leverage these for improved robot learning, we propose $\textbf{POCR}$, a new framework for building pre-trained object-centric representations for robotic control. Building on theories of "what-where" representations in psychology and computer vision, we use segmentations from a pre-trained model to stably locate the various entities in the scene across timesteps, capturing "where" information. To …
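The abstract describes a two-stage recipe: a pre-trained segmentation model supplies per-object masks ("where"), and a pre-trained visual encoder embeds each masked region ("what"); the per-object embeddings are then assembled into a fixed-size state for a control policy. The sketch below illustrates that composition under stated assumptions — `mean_color_encoder` is a toy stand-in for a real pre-trained "what" model, and the slot padding scheme is illustrative, not the paper's actual POCR implementation:

```python
import numpy as np

def mean_color_encoder(crop):
    """Toy stand-in for a pre-trained 'what' encoder: 3-dim mean color.
    A real system would use a pre-trained visual representation here."""
    return crop.reshape(-1, 3).mean(axis=0)

def object_centric_features(image, masks, encode_what, num_slots=4, feat_dim=3):
    """Compose 'what' and 'where': for each segmentation mask ('where'),
    embed the masked region with the encoder ('what'), then concatenate
    per-object embeddings into one fixed-size vector. Missing objects
    are zero-padded so the policy input size stays constant."""
    slots = []
    for mask in masks[:num_slots]:
        crop = image * mask[..., None]   # isolate one entity in the scene
        slots.append(encode_what(crop))  # appearance embedding for that entity
    while len(slots) < num_slots:        # pad empty slots to fixed length
        slots.append(np.zeros(feat_dim))
    return np.concatenate(slots)

# Usage with a synthetic 4x4 RGB image containing one red object:
img = np.zeros((4, 4, 3))
img[:2, :2] = [1.0, 0.0, 0.0]            # red square, top-left
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True                      # its segmentation mask
feats = object_centric_features(img, [mask], mean_color_encoder, num_slots=2)
print(feats.shape)  # (6,) — two 3-dim slots, second one zero-padded
```

Because masks are produced per-timestep by the segmenter and assigned to the same slot indices, the downstream policy sees each entity at a stable position in the feature vector — the "stably locate across timesteps" property the abstract highlights.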