Feb. 26, 2024, 5:43 a.m. | Homayoun Honari, Mehran Ghafarian Tamizi, Homayoun Najjaran

cs.LG updates on arXiv.org

arXiv:2402.15197v1 Announce Type: cross
Abstract: Safe reinforcement learning (Safe RL) refers to a class of techniques that aim to prevent RL algorithms from violating constraints during the trial-and-error process of decision-making and exploration. In this paper, a novel model-free Safe RL algorithm, formulated based on the multi-objective policy optimization framework, is introduced, in which the policy is optimized towards optimality and safety simultaneously. Optimality is achieved by the environment reward function, which is subsequently shaped using a …
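To make the multi-objective idea concrete, here is a minimal, purely illustrative sketch of a scalarized policy-gradient update in which the environment reward is shaped by a safety cost. The toy two-armed bandit, the `safety_weight` trade-off parameter, and all names are hypothetical assumptions for illustration, not the algorithm from the paper (the abstract is truncated before the shaping details).

```python
# Hypothetical sketch: REINFORCE on a toy bandit where the reward is
# shaped by subtracting a weighted safety cost, so the policy is pushed
# toward both optimality and safety at once. Not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

# Two-armed bandit: arm 0 is rewarding but unsafe, arm 1 safer but less rewarding.
REWARD = np.array([1.0, 0.5])
COST = np.array([1.0, 0.0])

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

theta = np.zeros(2)      # policy logits
safety_weight = 2.0      # assumed trade-off between reward and safety cost
lr = 0.1

for _ in range(2000):
    p = softmax(theta)
    a = rng.choice(2, p=p)
    shaped = REWARD[a] - safety_weight * COST[a]  # cost-shaped reward
    grad = -p
    grad[a] += 1.0       # gradient of log pi(a | theta)
    theta += lr * shaped * grad

print(softmax(theta))    # probability mass shifts toward the safe arm 1
```

With a sufficiently large `safety_weight`, the shaped return of the unsafe arm becomes negative, so the policy converges to the safe arm; lowering the weight recovers the reward-greedy behavior.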

