[R] RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control - Google DeepMind 2023 - Is able to perform multi-stage semantic reasoning and can interpret commands not present in the robot training data!
July 29, 2023, 7:31 p.m. | /u/Singularian2501
r/MachineLearning (www.reddit.com)
Project page: [https://robotics-transformer2.github.io/](https://robotics-transformer2.github.io/)
Blog: [https://www.deepmind.com/blog/rt-2-new-model-translates-vision-and-language-into-action](https://www.deepmind.com/blog/rt-2-new-model-translates-vision-and-language-into-action)
GitHub (RT-1 only, as of now): [https://github.com/google-research/robotics_transformer](https://github.com/google-research/robotics_transformer)
Abstract:
>We study how vision-language models trained on Internet-scale data can be incorporated directly into end-to-end robotic control to boost generalization and enable emergent semantic reasoning. Our goal is to enable a single end-to-end trained model to both learn to map robot observations to actions and enjoy the benefits of large-scale pretraining on language and vision-language data from the web. To this end, we propose to …
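The core trick behind "incorporating a vision-language model directly into robotic control" is that RT-2 represents robot actions as text tokens: each continuous action dimension is discretized into one of 256 bins, so the VLM can emit actions the same way it emits words. The sketch below illustrates that discretization round-trip; it is a minimal illustration, not DeepMind's code, and the action-space bounds and 7-DoF layout are assumptions for the example.

```python
import numpy as np

N_BINS = 256  # RT-2 discretizes each action dimension into 256 bins

def action_to_tokens(action, low=-1.0, high=1.0):
    """Map each continuous action dimension to a bin index in [0, N_BINS)."""
    clipped = np.clip(action, low, high)
    bins = ((clipped - low) / (high - low) * (N_BINS - 1)).round().astype(int)
    return bins.tolist()

def tokens_to_action(tokens, low=-1.0, high=1.0):
    """Invert the discretization (exact up to bin resolution)."""
    bins = np.asarray(tokens, dtype=float)
    return low + bins / (N_BINS - 1) * (high - low)

# Hypothetical 7-DoF action: position delta, rotation delta, gripper.
a = np.array([0.1, -0.2, 0.05, 0.0, 0.0, 0.3, 1.0])
toks = action_to_tokens(a)
recovered = tokens_to_action(toks)
```

Because actions become ordinary token sequences, the same network weights can be co-fine-tuned on web-scale vision-language data and robot trajectories, which is what lets semantic knowledge from the web transfer to control.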