Feb. 22, 2024, 5:43 a.m. | Raffaele Galliera, Kristen Brent Venable, Matteo Bassani, Niranjan Suri

cs.LG updates on arXiv.org

arXiv:2308.16198v3 Announce Type: replace
Abstract: Efficient information dissemination is crucial for supporting critical operations across domains like disaster response, autonomous vehicles, and sensor networks. This paper introduces a Multi-Agent Reinforcement Learning (MARL) approach as a significant step forward in achieving more decentralized, efficient, and collaborative information dissemination. We propose a Partially Observable Stochastic Game (POSG) formulation for information dissemination, empowering each agent to decide on message forwarding independently, based on the observation of its one-hop neighborhood. This constitutes a significant …
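To make the POSG formulation concrete, the sketch below shows what a per-agent forwarding decision over a one-hop neighborhood observation might look like. All names here (Observation, forward_decision, the feature layout, the threshold heuristic standing in for a learned policy) are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch, assuming each agent observes only its one-hop neighborhood
# and independently decides whether to forward a message. The heuristic below
# is a hypothetical stand-in for a policy learned with MARL.
from dataclasses import dataclass
from typing import List


@dataclass
class Observation:
    """Local view available to a single agent."""
    neighbor_covered: List[bool]  # one flag per one-hop neighbor: already reached?
    has_message: bool             # whether this agent currently holds the message


def forward_decision(obs: Observation, threshold: float = 0.5) -> bool:
    """Decide independently whether to forward the message.

    A trained policy would map the observation to a forwarding probability;
    here we forward only if the agent holds the message and the fraction of
    uncovered one-hop neighbors exceeds a fixed threshold.
    """
    if not obs.has_message or not obs.neighbor_covered:
        return False
    uncovered = sum(1 for covered in obs.neighbor_covered if not covered)
    return uncovered / len(obs.neighbor_covered) > threshold


if __name__ == "__main__":
    obs = Observation(neighbor_covered=[True, False, False], has_message=True)
    print("forward:", forward_decision(obs))  # forward: True
```

In the paper's setting, the decision function would be a learned policy optimized jointly across agents rather than a fixed rule; the point of the sketch is only that each agent acts on purely local, one-hop information.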

