Web: http://arxiv.org/abs/2206.07188

June 16, 2022, 1:10 a.m. | Zikang Xiong, Joe Eappen, He Zhu, Suresh Jagannathan

cs.LG updates on arXiv.org

Neural network policies trained with Deep Reinforcement Learning (DRL) are
well known to be susceptible to adversarial attacks. In this paper, we consider
attacks that manifest as perturbations in the observation space managed by the
external environment. Such attacks have been shown to significantly degrade
policy performance. We focus on well-trained deterministic and stochastic
neural network policies for continuous control benchmarks subjected to four
well-studied observation space adversarial attacks. To defend against these
attacks, we propose a …
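The abstract describes attacks delivered as perturbations of the agent's observations rather than of the environment itself. As a rough illustration only (not the paper's attack or its defense), the sketch below shows an FGSM-style observation-space perturbation against a toy deterministic continuous-control policy; the policy architecture, the action-deviation loss, and the epsilon budget are assumptions made for the example.

```python
# Illustrative sketch, NOT the paper's method: an FGSM-style perturbation of a
# policy's observation input. Architecture, loss, and eps are assumptions.
import torch
import torch.nn as nn


class Policy(nn.Module):
    """Toy deterministic policy: observation -> continuous action."""

    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.Tanh(),
            nn.Linear(64, act_dim), nn.Tanh(),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


def fgsm_observation_attack(policy: Policy, obs: torch.Tensor,
                            eps: float = 0.05) -> torch.Tensor:
    """Perturb the observation (not the environment state) so the policy's
    action deviates from its action on the clean observation."""
    obs = obs.clone().detach().requires_grad_(True)
    with torch.no_grad():
        clean_action = policy(obs)
    # Maximize the squared deviation from the clean action.
    loss = ((policy(obs) - clean_action) ** 2).sum()
    loss.backward()
    # One signed-gradient step within an L-infinity budget of eps.
    return (obs + eps * obs.grad.sign()).detach()


if __name__ == "__main__":
    torch.manual_seed(0)
    policy = Policy(obs_dim=8, act_dim=2)
    obs = torch.randn(1, 8)
    adv_obs = fgsm_observation_attack(policy, obs)
    print("clean action:   ", policy(obs))
    print("attacked action:", policy(adv_obs))
```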

