Imperceptible Backdoor Attack: From Input Space to Feature Representation. (arXiv:2205.03190v1 [cs.CR])
cs.LG updates on arXiv.org arxiv.org
Backdoor attacks are rapidly emerging threats to deep neural networks (DNNs).
In the backdoor attack scenario, attackers usually implant the backdoor into
the target model by manipulating the training dataset or training process.
Then, the compromised model behaves normally on benign inputs yet makes
mistakes when the pre-defined trigger appears. In this paper, we analyze the
drawbacks of existing attack approaches and propose a novel imperceptible
backdoor attack. We treat the trigger pattern as a special kind of noise