June 24, 2022, 1:11 a.m. | Haojing Shen, Sihong Chen, Ran Wang, Xizhao Wang

cs.LG updates on arXiv.org arxiv.org

In this paper, we propose a defence strategy that improves adversarial
robustness by incorporating hidden layer representation. The key idea of this
defence strategy is to compress or filter input information, including
adversarial perturbations. The defence can be regarded as an activation
function that can be applied to any kind of neural network. We also prove
theoretically the effectiveness of this defence strategy under certain
conditions. Besides, incorporating hidden layer representation, we propose three
types of adversarial attacks to …
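The truncated abstract does not specify the activation itself, only that it "compresses or filters" input information. As a purely illustrative sketch of that idea (not the paper's actual method), a filtering activation might zero out small-magnitude components, which a bounded adversarial perturbation would largely occupy; the function name and threshold `tau` below are assumptions:

```python
import numpy as np

def filtering_activation(x, tau=0.1):
    """Hypothetical 'compress or filter' activation: suppress
    small-magnitude components of the input, on the assumption that
    a small adversarial perturbation lives mostly in them.
    Illustration only; not the defence defined in the paper."""
    return np.where(np.abs(x) > tau, x, 0.0)

x = np.array([0.05, -0.5, 0.02, 1.2])
print(filtering_activation(x))  # small entries are zeroed out
```

Because it is elementwise and differentiable almost everywhere, such a function could in principle be dropped into any layer of a network, consistent with the abstract's claim that the defence "can be applied to any kind of neural network."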

