Nov. 7, 2022, 2:12 a.m. | Chao Yu, Akash Velu, Eugene Vinitsky, Jiaxuan Gao, Yu Wang, Alexandre Bayen, Yi Wu

cs.LG updates on arXiv.org

Proximal Policy Optimization (PPO) is a ubiquitous on-policy reinforcement
learning algorithm but is significantly less utilized than off-policy learning
algorithms in multi-agent settings. This is often due to the belief that PPO is
significantly less sample efficient than off-policy methods in multi-agent
systems. In this work, we carefully study the performance of PPO in cooperative
multi-agent settings. We show that PPO-based multi-agent algorithms achieve
surprisingly strong performance in four popular multi-agent testbeds: the
particle-world environments, the StarCraft multi-agent challenge, Google …
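
Since the abstract centers on PPO, here is a minimal illustrative sketch of the clipped surrogate objective at the core of PPO, written in NumPy. This is a generic textbook-style sketch, not the authors' MAPPO implementation; the function name `ppo_clip_objective` and the toy inputs are assumptions for illustration.

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """PPO clipped surrogate objective (to be maximized).

    ratio:     pi_theta(a|s) / pi_theta_old(a|s), per sample
    advantage: estimated advantage A(s, a), per sample
    eps:       clipping parameter (0.2 is a common default)
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # The elementwise minimum gives a pessimistic lower bound,
    # which discourages updates that move the policy too far
    # from the policy that collected the data.
    return np.mean(np.minimum(unclipped, clipped))

# Toy example (assumed values): two samples, one with a positive
# and one with a negative advantage estimate.
ratio = np.array([1.3, 0.7])
advantage = np.array([2.0, -1.0])
print(ppo_clip_objective(ratio, advantage))
```

In the multi-agent variants studied in the paper, each agent optimizes an objective of this clipped form; in practice the negated objective is minimized with a gradient-based optimizer.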

Tags: arxiv, games, ppo
