May 13, 2022, 1:11 a.m. | Zhongyu Li, Jun Zeng, Akshay Thirugnanam, Koushil Sreenath

cs.LG updates on arXiv.org

Bridging model-based safety and model-free reinforcement learning (RL) for
dynamic robots is appealing: model-based methods can provide formal safety
guarantees, while RL-based methods can exploit the robot's agility by learning
from the full-order system dynamics. However, current approaches to this
problem are mostly restricted to simple systems. In this paper, we propose a
new method to combine model-based safety with model-free reinforcement
learning by explicitly finding a low-dimensional model of the system
controlled by …
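To make the general idea concrete (this is not the paper's specific method, just a common pattern it builds on): a model-based safety layer can filter the action proposed by an RL policy, projecting it onto the set of actions that keep a barrier function `h(x) >= 0` from decaying too fast. The sketch below assumes a scalar state with dynamics `x_next = x + dt * u` and safe set `h(x) = 1 - x**2 >= 0`; all names and constants are illustrative.

```python
import math

def h(x):
    """Barrier function: the state is safe while h(x) >= 0 (i.e. |x| <= 1)."""
    return 1.0 - x * x

def safety_filter(x, u_rl, dt=0.1, alpha=0.5):
    """Return the action closest to the RL action u_rl that satisfies the
    discrete-time barrier condition h(x + dt*u) >= (1 - alpha) * h(x).

    For this scalar system the condition reduces to |x + dt*u| <= b with
    b = sqrt(alpha + (1 - alpha) * x**2), so the safe action set is an
    interval and the projection is a simple clamp.
    """
    b = math.sqrt(alpha + (1.0 - alpha) * x * x)
    u_lo = (-b - x) / dt
    u_hi = (b - x) / dt
    return min(max(u_rl, u_lo), u_hi)

# Near the boundary, an aggressive RL action gets clipped to stay safe.
x = 0.9
u_safe = safety_filter(x, u_rl=5.0)
assert h(x + 0.1 * u_safe) >= 0.5 * h(x) - 1e-9
```

In higher dimensions the same projection is typically posed as a small quadratic program (e.g. a control-barrier-function QP) solved at every control step, with the RL policy supplying the nominal action.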

