Rethinking Adversarial Policies: A Generalized Attack Formulation and Provable Defense in RL
Feb. 19, 2024, 5:43 a.m. | Xiangyu Liu, Souradip Chakraborty, Yanchao Sun, Furong Huang
Source: cs.LG updates on arXiv.org
Abstract: Most existing works focus on direct perturbations to the victim's state/action or the underlying transition dynamics to demonstrate the vulnerability of reinforcement learning agents to adversarial attacks. However, such direct manipulations may not always be realizable. In this paper, we consider a multi-agent setting where a well-trained victim agent $\nu$ is exploited by an attacker controlling another agent $\alpha$ with an adversarial policy. Previous models do not account for the possibility that the attacker may …
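The setting described above, an attacker that influences a fixed victim only through its own actions in a shared game, can be illustrated with a minimal sketch. This is a toy matrix-game example for intuition only, not the paper's formulation: the victim $\nu$ plays a fixed mixed strategy, and the attacker $\alpha$ chooses the action that minimizes the victim's expected reward. All names and payoffs below are illustrative assumptions.

```python
import numpy as np

# Toy two-agent game (illustrative assumption, not the paper's setup).
# Rows = victim's action, columns = attacker's action; entries are the
# victim's reward (matching-pennies-style payoffs).
R_victim = np.array([[ 1.0, -1.0],
                     [-1.0,  1.0]])

# Fixed, "well-trained" victim policy: a mixed strategy over its 2 actions.
victim_policy = np.array([0.7, 0.3])

def best_adversarial_response(R, pi_victim):
    """Attacker picks its action to minimize the victim's expected reward,
    without perturbing the victim's state, actions, or the dynamics."""
    expected = pi_victim @ R  # victim's expected reward for each attacker action
    return int(np.argmin(expected)), expected

attack_action, expected_rewards = best_adversarial_response(R_victim, victim_policy)
# With the payoffs above, the attacker chooses action 1, driving the
# victim's expected reward from +0.4 down to -0.4.
```

The key point the sketch captures is that the attacker never touches the victim directly; it only selects its own policy, which is what makes such "adversarial policy" attacks realizable where direct perturbations are not.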