March 12, 2024, 4:41 a.m. | Min Cheng, Ruida Zhou, P. R. Kumar, Chao Tian

cs.LG updates on arXiv.org

arXiv:2403.05738v1 Announce Type: new
Abstract: We study Markov potential games under the infinite horizon average reward criterion. Most previous studies have considered discounted rewards. We prove that algorithms based on both independent policy gradient and independent natural policy gradient converge globally to a Nash equilibrium under the average reward criterion. To set the stage for gradient-based methods, we first establish that the average reward is a smooth function of policies and provide sensitivity bounds for the differential value functions, …
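A minimal sketch of the kind of independent policy-gradient dynamics the abstract refers to, not the paper's algorithm verbatim: two agents in a tabular identical-interest game (a special case of a Markov potential game) each run softmax policy-gradient ascent on the average reward, using the stationary distribution and differential (bias) value function. All sizes, rewards, and the step size below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A = 3, 2                                      # states, actions per agent (assumed sizes)
P = rng.dirichlet(np.ones(S), size=(S, A, A))    # P[s, a1, a2, s'] transition kernel
R = rng.random((S, A, A))                        # shared reward => identical-interest potential game

def softmax(x):
    z = np.exp(x - x.max(axis=-1, keepdims=True))
    return z / z.sum(axis=-1, keepdims=True)

def evaluate(pi1, pi2):
    """Average reward J, stationary distribution d, and differential Q-values."""
    joint = pi1[:, :, None] * pi2[:, None, :]             # joint policy over (a1, a2)
    P_pi = np.einsum('sab,sabt->st', joint, P)             # induced state transition matrix
    r_pi = np.einsum('sab,sab->s', joint, R)               # expected per-state reward
    # Stationary distribution: left Perron eigenvector of P_pi
    w, v = np.linalg.eig(P_pi.T)
    d = np.real(v[:, np.argmin(np.abs(w - 1))])
    d = d / d.sum()
    J = d @ r_pi
    # Differential value: solve the Poisson equation via the fundamental matrix
    h = np.linalg.solve(np.eye(S) - P_pi + np.outer(np.ones(S), d), r_pi - J)
    Q = R - J + np.einsum('sabt,t->sab', P, h)             # differential Q(s, a1, a2)
    return J, d, Q

theta1, theta2 = np.zeros((S, A)), np.zeros((S, A))
lr = 0.5                                                   # assumed step size
for _ in range(500):
    pi1, pi2 = softmax(theta1), softmax(theta2)
    J, d, Q = evaluate(pi1, pi2)
    # Each agent marginalizes out the other agent's (fixed) policy
    Q1 = np.einsum('sab,sb->sa', Q, pi2)
    Q2 = np.einsum('sab,sa->sb', Q, pi1)
    # Independent softmax policy-gradient steps (average-reward policy gradient theorem)
    adv1 = Q1 - (pi1 * Q1).sum(axis=1, keepdims=True)
    adv2 = Q2 - (pi2 * Q2).sum(axis=1, keepdims=True)
    theta1 += lr * d[:, None] * pi1 * adv1
    theta2 += lr * d[:, None] * pi2 * adv2

print("average reward after independent gradient ascent:", J)
```

In a potential game the shared potential increases along such independent updates, which is the intuition behind the global convergence result stated in the abstract; the natural policy gradient variant differs only in how the gradient direction is preconditioned.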
