Sept. 27, 2022, 1:13 a.m. | Rogerio Bonatti, Sai Vemprala, Shuang Ma, Felipe Frujeri, Shuhang Chen, Ashish Kapoor

cs.CV updates on arXiv.org

Robotics has long been a field riddled with complex system architectures
whose modules and connections, whether traditional or learning-based, require
significant human expertise and prior knowledge. Inspired by large pre-trained
language models, this work introduces a paradigm for pre-training a
general-purpose representation that can serve as a starting point for multiple
tasks on a given robot. We present the Perception-Action Causal Transformer
(PACT), a generative transformer-based architecture that aims to build
representations directly from robot data in a self-supervised …
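The abstract describes a causal transformer trained over robot perception and action data. A minimal sketch of the core idea, under assumptions not stated in the truncated abstract (the interleaving scheme, embedding dimensions, and attention details here are illustrative, not the authors' implementation), is to interleave perception and action embeddings into one sequence and apply causally masked self-attention so each token conditions only on the past:

```python
# Hypothetical sketch of causal attention over interleaved
# perception/action tokens, in the spirit of PACT. All names and
# dimensions are illustrative assumptions.
import numpy as np

def causal_mask(seq_len):
    """Lower-triangular mask: position i may attend only to positions <= i."""
    return np.tril(np.ones((seq_len, seq_len), dtype=bool))

def causal_self_attention(tokens, mask):
    """Single-head scaled dot-product attention with a causal mask.

    tokens: (T, d) array of embedded perception/action tokens.
    Returns a (T, d) array of contextualized tokens.
    """
    d = tokens.shape[-1]
    scores = tokens @ tokens.T / np.sqrt(d)       # (T, T) pairwise similarity
    scores = np.where(mask, scores, -np.inf)      # block attention to the future
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ tokens

# Interleave hypothetical perception (p_t) and action (a_t) embeddings as
# [p_0, a_0, p_1, a_1, ...]; a self-supervised objective would then predict
# each next token from everything before it.
rng = np.random.default_rng(0)
T, d = 6, 8                       # 3 timesteps x (perception, action), dim 8
tokens = rng.standard_normal((T, d))
out = causal_self_attention(tokens, causal_mask(T))
```

Because the first position can attend only to itself, its output equals its input token, which is a quick sanity check that the mask is applied correctly.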

arxiv perception pre-training robotics training transformer
