Defending Observation Attacks in Deep Reinforcement Learning via Detection and Denoising. (arXiv:2206.07188v1 [cs.LG])
Web: http://arxiv.org/abs/2206.07188
cs.LG updates on arXiv.org
Neural network policies trained using Deep Reinforcement Learning (DRL) are
well-known to be susceptible to adversarial attacks. In this paper, we consider
attacks manifesting as perturbations in the observation space managed by the
external environment. These attacks have been shown to significantly degrade
policy performance. We focus on well-trained deterministic and stochastic
neural network policies for continuous control benchmarks, subject to four
well-studied observation-space adversarial attacks.
To defend against these attacks, we propose a …
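To make the threat model and defense concrete, here is a minimal Python sketch. It is not the authors' implementation: linf_attack stands in for a gradient-based observation attack using bounded random noise, and DetectAndDenoise is a hypothetical wrapper that flags an observation by reconstruction error and substitutes a denoised version; the denoiser, threshold, and all names are illustrative assumptions.

import numpy as np

def linf_attack(obs, epsilon=0.05, rng=None):
    # Illustrative observation-space attack: an l_inf-bounded perturbation.
    # A real attack (e.g. FGSM/PGD) would follow the policy's gradient;
    # random noise stands in here so the sketch stays self-contained.
    rng = rng if rng is not None else np.random.default_rng(0)
    delta = rng.uniform(-epsilon, epsilon, size=obs.shape)
    return obs + delta

class DetectAndDenoise:
    # Hypothetical defense wrapper: declare an observation attacked when
    # its reconstruction error under a denoiser exceeds a threshold, and
    # hand the policy the denoised reconstruction instead.
    def __init__(self, denoiser, threshold):
        self.denoiser = denoiser      # assumed callable: obs -> cleaned obs
        self.threshold = threshold    # detection cutoff, tuned on clean data

    def __call__(self, obs):
        recon = self.denoiser(obs)
        err = np.linalg.norm(obs - recon)
        return recon if err > self.threshold else obs

# Toy usage: clipping stands in for a trained denoising model.
denoiser = lambda o: np.clip(o, -1.0, 1.0)
defense = DetectAndDenoise(denoiser, threshold=0.1)
clean_obs = np.array([0.2, -0.5, 0.9])
attacked_obs = linf_attack(clean_obs, epsilon=0.3)
print(defense(attacked_obs))  # the policy would consume this observation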
Tags: arXiv cs.LG, adversarial attacks, deep reinforcement learning, detection, denoising