Feb. 26, 2024, 5:44 a.m. | Kai Cui, Sascha Hauck, Christian Fabian, Heinz Koeppl

cs.LG updates on arXiv.org

arXiv:2307.06175v2 Announce Type: replace
Abstract: Recent reinforcement learning (RL) methods have achieved success in various domains. However, multi-agent RL (MARL) remains a challenge in terms of decentralization, partial observability and scalability to many agents. Meanwhile, collective behavior requires resolution of the aforementioned challenges, and remains of importance to many state-of-the-art applications such as active matter physics, self-organizing systems, opinion dynamics, and biological or robotic swarms. Here, MARL via mean field control (MFC) offers a potential solution to scalability, but fails …
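To make the scalability argument concrete, the core idea behind mean field control is that each agent conditions its policy on its own state and the empirical distribution ("mean field") of the population, rather than on every other agent, so the policy input size stays fixed as the number of agents grows. The following is a minimal illustrative sketch of that general idea only, not the authors' algorithm or code; the function names (`empirical_mean_field`, `mean_field_policy`) and the toy policy rule are assumptions made for illustration.

```python
# Sketch (assumed, not from the paper): decentralized agents acting on their own
# state plus the empirical mean field of the population.
import numpy as np

N_AGENTS = 1000   # population size
N_STATES = 5      # finite individual state space {0, ..., 4}
N_ACTIONS = 3

rng = np.random.default_rng(0)
agent_states = rng.integers(0, N_STATES, size=N_AGENTS)

def empirical_mean_field(states: np.ndarray, n_states: int) -> np.ndarray:
    """Empirical state distribution: mu[s] = fraction of agents currently in state s."""
    return np.bincount(states, minlength=n_states) / len(states)

def mean_field_policy(own_state: int, mu: np.ndarray) -> np.ndarray:
    """Toy policy whose input is (own state, mean field); its dimension is
    independent of the number of agents, which is the scalability point."""
    logits = np.ones(N_ACTIONS) + 0.1 * own_state + mu[own_state]  # arbitrary illustrative rule
    return np.exp(logits) / np.exp(logits).sum()

mu = empirical_mean_field(agent_states, N_STATES)
actions = np.array([rng.choice(N_ACTIONS, p=mean_field_policy(s, mu)) for s in agent_states])
print("mean field:", mu)
print("first 10 actions:", actions[:10])
```

Note that each agent's decision rule above uses only local information plus the aggregate statistic `mu`, which is what lets mean-field formulations sidestep the exponential blow-up of the joint state-action space in standard MARL.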

