Nov. 5, 2023, 6:42 a.m. | Matthias Gerstgrasser, Tom Danino, Sarah Keren

cs.LG updates on arXiv.org

We present a novel multi-agent RL approach, Selective Multi-Agent Prioritized
Experience Relay, in which agents share with other agents a limited number of
transitions they observe during training. The intuition behind this is that
even a small number of relevant experiences from other agents could help each
agent learn. Unlike many other multi-agent RL algorithms, this approach allows
for largely decentralized training, requiring only a limited communication
channel between agents. We show that our approach outperforms baseline
no-sharing decentralized training …
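To make the sharing mechanism concrete, here is a minimal sketch of selective experience relay, assuming each agent keeps a prioritized local replay buffer and forwards only its top-k highest-priority transitions to peers over a small communication channel. The names Agent, Transition, relay_top_k, and the priority field are illustrative placeholders, not taken from the paper's code, and priorities here are random stand-ins for something like a TD-error estimate.

import random
from collections import namedtuple

# Illustrative transition record; "priority" would be a TD-error proxy in practice.
Transition = namedtuple("Transition", ["state", "action", "reward", "next_state", "priority"])

class Agent:
    def __init__(self, name):
        self.name = name
        self.buffer = []    # local prioritized replay buffer
        self.incoming = []  # transitions relayed by other agents

    def observe(self, transition):
        # Store a locally observed transition.
        self.buffer.append(transition)

    def relay_top_k(self, others, k=2):
        # Share only the k highest-priority transitions, keeping the
        # communication channel between agents small.
        top = sorted(self.buffer, key=lambda t: t.priority, reverse=True)[:k]
        for other in others:
            other.incoming.extend(top)

    def merge_shared(self):
        # Fold relayed experiences into the local buffer so they can be
        # sampled during this agent's otherwise decentralized training.
        self.buffer.extend(self.incoming)
        self.incoming.clear()

# Usage: two agents training independently, exchanging one transition each.
a, b = Agent("a"), Agent("b")
for step in range(3):
    a.observe(Transition(step, 0, random.random(), step + 1, priority=random.random()))
    b.observe(Transition(step, 1, random.random(), step + 1, priority=random.random()))
a.relay_top_k([b], k=1)
b.relay_top_k([a], k=1)
a.merge_shared()
b.merge_shared()
print(len(a.buffer), len(b.buffer))  # each buffer now holds 3 local + 1 relayed transition

In a full implementation, each agent would train its own off-policy learner from its buffer as usual; the only coordination is the small relay step, which is what keeps training largely decentralized.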
