Oct. 17, 2023, 1 p.m. | Anthony Alford

InfoQ - AI, ML & Data Engineering www.infoq.com

Google DeepMind recently announced Robotics Transformer 2 (RT-2), a vision-language-action (VLA) AI model for controlling robots. RT-2 uses a fine-tuned LLM that represents robot actions as text tokens, so the same model that interprets camera images and natural-language instructions can output motion control commands directly. It can perform tasks not explicitly included in its training data and improves on baseline models by up to 3x on emergent skill evaluations.
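
To make the "actions as text tokens" idea concrete, here is a minimal, hypothetical Python sketch of how a string of discretized action tokens emitted by such a model might be decoded into a continuous end-effector command. This is not DeepMind's code: the eight-dimension layout is taken from the RT-2 description, but the bin count, value ranges, and termination convention below are assumptions chosen for illustration.

```python
# Illustrative decoder for an RT-2-style action string (assumptions noted inline).
from dataclasses import dataclass

NUM_BINS = 256  # assumed discretization granularity per action dimension

# Assumed continuous ranges for each dimension (illustrative values only).
POSITION_RANGE = (-0.1, 0.1)   # meters of end-effector translation per step
ROTATION_RANGE = (-0.5, 0.5)   # radians of end-effector rotation per step
GRIPPER_RANGE = (0.0, 1.0)     # 0 = closed, 1 = fully open


@dataclass
class RobotAction:
    terminate: bool
    delta_position: tuple   # (dx, dy, dz)
    delta_rotation: tuple   # (droll, dpitch, dyaw)
    gripper: float


def unbin(token: int, low: float, high: float) -> float:
    """Map a discrete token in [0, NUM_BINS) back to a continuous value."""
    return low + (token / (NUM_BINS - 1)) * (high - low)


def decode_action(model_output: str) -> RobotAction:
    """Parse a space-separated action string such as '1 128 91 241 5 101 127 217'."""
    tokens = [int(t) for t in model_output.split()]
    if len(tokens) != 8:
        raise ValueError(f"expected 8 action tokens, got {len(tokens)}")
    terminate = tokens[0] == 0  # assumed convention: token 0 signals episode termination
    dx, dy, dz = (unbin(t, *POSITION_RANGE) for t in tokens[1:4])
    droll, dpitch, dyaw = (unbin(t, *ROTATION_RANGE) for t in tokens[4:7])
    gripper = unbin(tokens[7], *GRIPPER_RANGE)
    return RobotAction(terminate, (dx, dy, dz), (droll, dpitch, dyaw), gripper)


if __name__ == "__main__":
    # Example: decode one hypothetical model output into a robot command.
    print(decode_action("1 128 91 241 5 101 127 217"))
```

The point of the sketch is the design choice it illustrates: because actions are just another token vocabulary, the robot policy can be trained and fine-tuned with the same machinery used for vision-language models, which is what lets RT-2 transfer web-scale knowledge to control tasks.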