April 23, 2024, 4:43 a.m. | Junyao Shi, Jianing Qian, Yecheng Jason Ma, Dinesh Jayaraman

cs.LG updates on arXiv.org

arXiv:2404.13474v1 Announce Type: cross
Abstract: There have recently been large advances both in pre-training visual representations for robotic control and segmenting unknown category objects in general images. To leverage these for improved robot learning, we propose $\textbf{POCR}$, a new framework for building pre-trained object-centric representations for robotic control. Building on theories of "what-where" representations in psychology and computer vision, we use segmentations from a pre-trained model to stably locate across timesteps, various entities in the scene, capturing "where" information. To …

