Launching a Robust Backdoor Attack under Capability Constrained Scenarios. (arXiv:2304.10985v1 [cs.CR])
cs.CV updates on arXiv.org
As deep neural networks are increasingly deployed in critical domains, concerns
over their security have emerged. Deep learning models are vulnerable to
backdoor attacks because of their lack of transparency. A poisoned backdoor model
performs normally on benign inputs but exhibits malicious behavior when the
input contains a trigger. Current research on backdoor attacks focuses on
improving the stealthiness of triggers, and most approaches assume strong
attacker capabilities, such as knowledge of the model structure or control over …
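The trigger mechanism the abstract describes can be illustrated with a minimal data-poisoning sketch. This is a toy example, not the paper's method: the square patch trigger, the poison rate, and the target label are all illustrative assumptions.

```python
import numpy as np

def apply_trigger(image, patch_value=1.0, patch_size=3):
    """Stamp a small square patch into the bottom-right corner of an image.

    The patch value and size are illustrative choices, not from the paper.
    """
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = patch_value
    return poisoned

def poison_dataset(images, labels, target_label, poison_rate=0.1, seed=0):
    """Replace a fraction of training samples with triggered copies
    relabeled to the attacker's target class (classic dirty-label poisoning)."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_label
    return images, labels, idx

# Toy data: 100 blank 28x28 "images" with labels 0..9.
X = np.zeros((100, 28, 28), dtype=np.float32)
y = np.arange(100) % 10
Xp, yp, idx = poison_dataset(X, y, target_label=7, poison_rate=0.1)
print(len(idx))  # 10 samples carry the trigger and the attacker's label
```

A model trained on `Xp, yp` would learn to associate the corner patch with class 7 while behaving normally on clean inputs, which is exactly the dual behavior the abstract warns about.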