Imperceptible Backdoor Attack: From Input Space to Feature Representation. (arXiv:2205.03190v1 [cs.CR])
Web: http://arxiv.org/abs/2205.03190
May 9, 2022, 1:11 a.m. | Nan Zhong, Zhenxing Qian, Xinpeng Zhang
cs.LG updates on arXiv.org arxiv.org
Backdoor attacks are rapidly emerging threats to deep neural networks (DNNs).
In a backdoor attack, the attacker typically implants a backdoor into the
target model by manipulating the training dataset or the training process.
The compromised model then behaves normally on benign inputs but makes
mistakes when a pre-defined trigger appears. In this paper, we analyze the
drawbacks of existing attack approaches and propose a novel imperceptible
backdoor attack. We treat the trigger pattern as a special kind of noise
following …
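The data-poisoning setup the abstract describes — stamping a trigger on a fraction of the training set and relabeling those samples — can be sketched in a few lines. This is a minimal illustrative example of classic patch-based poisoning, not the paper's imperceptible method; the function name, patch shape, and poisoning rate are all hypothetical choices.

```python
import numpy as np

def poison_dataset(images, labels, target_label, rate=0.1, seed=0):
    """Illustrative patch-trigger backdoor poisoning (hypothetical sketch,
    not the method proposed in the paper).

    Stamps a small white patch (the trigger) onto a random fraction of the
    training images and relabels them as `target_label`, so that a model
    trained on the result learns to associate the patch with that class
    while behaving normally on clean inputs.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    # pick which samples to poison
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    # stamp a 3x3 white patch in the bottom-right corner of each chosen image
    images[idx, -3:, -3:] = 1.0
    # flip the poisoned samples' labels to the attacker's target class
    labels[idx] = target_label
    return images, labels, idx

# Demo on placeholder data standing in for a real image dataset:
X = np.zeros((100, 28, 28))          # 100 grayscale "images"
y = np.zeros(100, dtype=int)         # all originally class 0
Xp, yp, idx = poison_dataset(X, y, target_label=7, rate=0.1)
```

The paper's contribution is precisely that a visible corner patch like this is easy to spot; their trigger is instead crafted as imperceptible noise in input space.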
Latest AI/ML/Big Data Jobs
Data Analyst, Patagonia Action Works
@ Patagonia | Remote
Data & Insights Strategy & Innovation General Manager
@ Chevron Services Company, a division of Chevron U.S.A. Inc. | Houston, TX
Faculty members in Research areas such as Bayesian and Spatial Statistics; Data Privacy and Security; AI/ML; NLP; Image and Video Data Analysis
@ Ahmedabad University | Ahmedabad, India
Director, Applied Mathematics & Computational Research Division
@ Lawrence Berkeley National Lab | Berkeley, CA
Business Data Analyst
@ MainStreet Family Care | Birmingham, AL
Assistant/Associate Professor of the Practice in Business Analytics
@ Georgetown University McDonough School of Business | Washington, DC