Web: http://arxiv.org/abs/2011.14045

June 24, 2022, 1:11 a.m. | Haojing Shen, Sihong Chen, Ran Wang, Xizhao Wang

cs.LG updates on arXiv.org arxiv.org

In this paper, we propose a defence strategy that improves adversarial
robustness by incorporating hidden layer representation. The key idea of this
defence strategy is to compress or filter input information, including
adversarial perturbation. The strategy can be regarded as an activation
function that can be applied to any kind of neural network. We also prove
theoretically the effectiveness of this defence strategy under certain
conditions. Besides, by incorporating hidden layer representation, we propose
three types of adversarial attacks to …
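The excerpt does not specify the exact form of the proposed activation, so the following is only an illustrative sketch of the general idea it describes: an activation that compresses or filters small components of its input, which is where low-magnitude adversarial perturbation would live. The shrinkage-style function and its `threshold` parameter below are hypothetical, not the paper's method.

```python
import numpy as np

def compressive_activation(x, threshold=0.1):
    """Hypothetical shrinkage-style activation (NOT the paper's exact method).

    Components with magnitude below `threshold` are zeroed out and larger
    components are shrunk toward zero, so small perturbations added to the
    input are filtered while the dominant signal passes through.
    """
    return np.sign(x) * np.maximum(np.abs(x) - threshold, 0.0)

# The small 0.05 component (e.g. perturbation noise) is removed entirely;
# the larger components survive, shrunk by the threshold.
x = np.array([0.05, -0.3, 0.8])
print(compressive_activation(x))  # → [ 0.  -0.2  0.7]
```

Because it is applied element-wise, such a function can in principle be dropped into any layer of a network, which matches the abstract's claim that the defence acts as a generic activation function.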

