Feb. 19, 2024, 5:43 a.m. | Xiangyu Liu, Souradip Chakraborty, Yanchao Sun, Furong Huang

cs.LG updates on arXiv.org

arXiv:2305.17342v2 Announce Type: replace
Abstract: Most existing works focus on direct perturbations to the victim's state/action or to the underlying transition dynamics to demonstrate the vulnerability of reinforcement learning agents to adversarial attacks. However, such direct manipulations may not always be realizable. In this paper, we consider a multi-agent setting where a well-trained victim agent $\nu$ is exploited by an attacker controlling another agent $\alpha$ with an \textit{adversarial policy}. Previous models do not account for the possibility that the attacker may …
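The threat model described in the abstract can be illustrated with a minimal sketch: the attacker never touches the victim's observations or the transition function directly; it only controls its own agent $\alpha$, and the victim $\nu$ is hurt solely through their interaction in the shared environment. The toy environment, the REINFORCE update, and all hyperparameters below are illustrative assumptions, not the paper's construction.

```python
# Minimal sketch of an adversarial-policy attack, assuming a toy
# two-agent discrete environment. The victim's policy is frozen;
# only the attacker's policy is trained, with reward equal to the
# NEGATIVE of the victim's return.
import numpy as np

rng = np.random.default_rng(0)

N_ACTIONS = 3   # actions per agent (assumed)
N_STATES = 5    # discrete joint-observation space (assumed)

# Fixed victim policy nu: a random stand-in for a "well-trained" policy.
victim_policy = rng.dirichlet(np.ones(N_ACTIONS), size=N_STATES)

def step(state, a_victim, a_attacker):
    """Toy transition: the next state depends on BOTH agents' actions,
    which is the only channel of influence the attacker has."""
    next_state = (state + a_victim + 2 * a_attacker) % N_STATES
    victim_reward = 1.0 if a_victim == (state % N_ACTIONS) else 0.0
    return next_state, victim_reward

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

# Attacker policy alpha: softmax over per-state logits.
logits = np.zeros((N_STATES, N_ACTIONS))

# REINFORCE on the attacker, maximizing -1 * (victim return).
for episode in range(2000):
    state, traj, victim_return = 0, [], 0.0
    for t in range(10):
        a_v = rng.choice(N_ACTIONS, p=victim_policy[state])
        a_a = rng.choice(N_ACTIONS, p=softmax(logits[state]))
        traj.append((state, a_a))
        state, r_v = step(state, a_v, a_a)
        victim_return += r_v
    for s, a in traj:
        grad = -softmax(logits[s])       # grad of log pi(a|s) ...
        grad[a] += 1.0                   # ... = onehot(a) - probs
        logits[s] += 0.01 * (-victim_return) * grad

# Victim return in the last training episode (lower = stronger attack).
print("victim return under attack:", victim_return)
```

Note the contrast with the direct-perturbation attacks mentioned in the first sentence: nothing in this loop modifies the victim's state, action, or the transition function itself; the attack succeeds only because the environment couples the two agents.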
