June 23, 2022, 1:11 a.m. | Aivar Sootla, Alexander I. Cowen-Rivers, Taher Jafferjee, Ziyan Wang, David Mguni, Jun Wang, Haitham Bou-Ammar

cs.LG updates on arXiv.org

Satisfying safety constraints almost surely (or with probability one) can be
critical for the deployment of Reinforcement Learning (RL) in real-life
applications. For example, plane landing and take-off should ideally occur with
probability one. We address the problem by introducing Safety Augmented (Saute)
Markov Decision Processes (MDPs), where the safety constraints are eliminated
by augmenting them into the state-space and reshaping the objective. We show
that Saute MDP satisfies the Bellman equation and moves us closer to solving
Safe RL …
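
The abstract only sketches the mechanism, but the core idea of augmenting the state with the safety constraint can be illustrated with a short, hypothetical environment wrapper. The sketch below assumes a Gymnasium-style environment that reports a per-step safety cost in `info["cost"]`; the wrapper name, budget normalization, and penalty value are illustrative assumptions, not the paper's implementation.

```python
import gymnasium as gym
import numpy as np


class SauteWrapper(gym.Wrapper):
    """Hypothetical sketch of safety state augmentation in the spirit of
    Saute MDPs: the remaining (normalized) safety budget is appended to the
    observation, and the reward is reshaped once the budget is exhausted."""

    def __init__(self, env, safety_budget: float, unsafe_reward: float = -1.0):
        super().__init__(env)
        self.safety_budget = safety_budget
        self.unsafe_reward = unsafe_reward
        # Extend the observation space with one extra dimension for the budget.
        low = np.append(env.observation_space.low, -np.inf)
        high = np.append(env.observation_space.high, np.inf)
        self.observation_space = gym.spaces.Box(low=low, high=high, dtype=np.float64)
        self._remaining = safety_budget

    def reset(self, **kwargs):
        obs, info = self.env.reset(**kwargs)
        self._remaining = self.safety_budget
        return self._augment(obs), info

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        # `info["cost"]` is an assumed per-step safety cost exposed by the env.
        cost = info.get("cost", 0.0)
        self._remaining -= cost
        if self._remaining <= 0.0:
            # Budget exhausted: reshape the objective so the agent is
            # penalized regardless of the task reward.
            reward = self.unsafe_reward
        return self._augment(obs), reward, terminated, truncated, info

    def _augment(self, obs):
        # Append the normalized remaining safety budget to the state.
        z = self._remaining / self.safety_budget
        return np.append(obs, z)
```

Because the safety budget now lives in the state and the constraint is enforced through the reshaped reward, any standard (unconstrained) RL algorithm can in principle be run on the wrapped environment.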

