April 1, 2024, 4:41 a.m. | Hei Yi Mak, Flint Xiaofeng Fan, Luca A. Lanzendörfer, Cheston Tan, Wei Tsang Ooi, Roger Wattenhofer

cs.LG updates on arXiv.org

arXiv:2403.20156v1 Announce Type: new
Abstract: In this study, we delve into Federated Reinforcement Learning (FedRL) in the context of value-based agents operating across diverse Markov Decision Processes (MDPs). Existing FedRL methods typically aggregate agents' learning by averaging the value functions across them to improve their performance. However, this aggregation strategy is suboptimal in heterogeneous environments where agents converge to diverse optimal value functions. To address this problem, we introduce the Convergence-AwarE SAmpling with scReening (CAESAR) aggregation scheme designed to enhance …
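To make the critique concrete, here is a minimal sketch (not the authors' code; CAESAR's details are truncated in the abstract) of the baseline FedRL aggregation being criticized: uniformly averaging every agent's value function. The tabular Q-function shapes and agent count are illustrative assumptions.

```python
# Sketch of naive FedAvg-style value-function aggregation in FedRL.
# In heterogeneous MDPs, each agent converges toward a different optimal
# Q*, so a single uniform average can be far from all of them -- the
# problem CAESAR addresses by sampling and screening which agents'
# value functions to aggregate from.
import numpy as np

def average_value_functions(q_tables: list[np.ndarray]) -> np.ndarray:
    """Uniformly average per-agent tabular Q-functions."""
    return np.mean(np.stack(q_tables), axis=0)

# Three agents, each with a 5-state x 2-action Q-table (hypothetical sizes).
rng = np.random.default_rng(0)
q_tables = [rng.normal(size=(5, 2)) for _ in range(3)]

global_q = average_value_functions(q_tables)
print(global_q.shape)  # (5, 2); broadcast back to agents as the shared estimate
```

A convergence-aware scheme would instead weight or exclude contributions based on how each agent's value function is converging, rather than pooling all agents uniformly.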
