Web: http://arxiv.org/abs/2209.11133

Sept. 23, 2022, 1:11 a.m. | Rogerio Bonatti, Sai Vemprala, Shuang Ma, Felipe Frujeri, Shuhang Chen, Ashish Kapoor

cs.LG updates on arXiv.org

Robotics has long been a field riddled with complex systems architectures
whose modules and connections, whether traditional or learning-based, require
significant human expertise and prior knowledge. Inspired by large pre-trained
language models, this work introduces a paradigm for pre-training a
general-purpose representation that can serve as a starting point for multiple tasks on
a given robot. We present the Perception-Action Causal Transformer (PACT), a
generative transformer-based architecture that aims to build representations
directly from robot data in a self-supervised …
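The core idea behind a perception-action causal transformer can be illustrated with a minimal sketch (not the authors' code; all names here are illustrative): per-timestep perception and action embeddings are interleaved into one token sequence, and a causal attention mask restricts each position to earlier tokens, which is what makes self-supervised next-token training possible.

```python
import numpy as np

def interleave(perception_tokens, action_tokens):
    """Interleave per-timestep state and action embeddings: [s_0, a_0, s_1, a_1, ...]."""
    T, d = perception_tokens.shape
    seq = np.empty((2 * T, d), dtype=perception_tokens.dtype)
    seq[0::2] = perception_tokens
    seq[1::2] = action_tokens
    return seq

def causal_mask(n):
    """Lower-triangular mask: position i may attend only to positions <= i."""
    return np.tril(np.ones((n, n), dtype=bool))

def causal_self_attention(x, mask):
    """Single-head scaled dot-product self-attention with a causal mask."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)          # pairwise similarities (n, n)
    scores = np.where(mask, scores, -1e9)  # block attention to future tokens
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)     # row-wise softmax
    return w @ x                           # attended outputs (n, d)

# Toy rollout: 4 timesteps, 8-dim embeddings.
T, d = 4, 8
rng = np.random.default_rng(0)
states = rng.normal(size=(T, d))
actions = rng.normal(size=(T, d))
seq = interleave(states, actions)                      # shape (8, 8)
out = causal_self_attention(seq, causal_mask(len(seq)))
print(seq.shape, out.shape)
```

Because the mask is causal, the first token can attend only to itself, so its output equals its input; later positions mix in progressively more history. A real model would stack several such layers with learned projections and train them to predict the next state or action token.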

