April 24, 2024, 4:42 a.m. | Haozhe Tian, Homayoun Hamedmoghadam, Robert Shorten, Pietro Ferraro

cs.LG updates on arXiv.org

arXiv:2404.15199v1 Announce Type: new
Abstract: Reinforcement Learning (RL) is a powerful method for controlling dynamic systems, but its learning mechanism can lead to unpredictable actions that undermine the safety of critical systems. Here, we propose RL with Adaptive Control Regularization (RL-ACR) that ensures RL safety by combining the RL policy with a control regularizer that hard-codes safety constraints over forecasted system behaviors. The adaptability is achieved by using a learnable "focus" weight trained to maximize the cumulative reward of the …
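The abstract describes blending the learned RL policy with a hard-coded control regularizer through a learnable "focus" weight trained on the RL reward. Below is a minimal sketch, in PyTorch, of how such a convex combination of actions might look; the class name `FocusWeightedPolicy`, the sigmoid parameterization of the weight, and the `rl_policy` / `safe_controller` callables are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class FocusWeightedPolicy(nn.Module):
    """Illustrative sketch (not the paper's code): blend an RL policy with a
    safety-oriented control regularizer via a learnable "focus" weight."""

    def __init__(self, rl_policy: nn.Module, safe_controller):
        super().__init__()
        self.rl_policy = rl_policy                # learned, exploratory RL policy
        self.safe_controller = safe_controller    # hard-coded controller enforcing safety constraints
        # Unconstrained parameter mapped to (0, 1) by a sigmoid; assumed to be
        # trained jointly with the RL objective so it adapts to maximize reward.
        self._focus_logit = nn.Parameter(torch.zeros(1))

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        focus = torch.sigmoid(self._focus_logit)      # learnable mixing weight in (0, 1)
        rl_action = self.rl_policy(state)              # action proposed by the RL policy
        safe_action = self.safe_controller(state)      # action from the control regularizer
        # Convex combination: focus -> 1 trusts the RL policy,
        # focus -> 0 defers to the safety regularizer.
        return focus * rl_action + (1.0 - focus) * safe_action
```

In such a setup, the focus parameter would simply be included among the parameters optimized against the cumulative-reward objective, letting the agent lean on the regularizer early in training and shift toward the RL policy as it improves.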

