Architectural Backdoors in Neural Networks. (arXiv:2206.07840v1 [cs.LG])
Web: http://arxiv.org/abs/2206.07840
June 17, 2022, 1:10 a.m. | Mikel Bober-Irizar, Ilia Shumailov, Yiren Zhao, Robert Mullins, Nicolas Papernot
cs.LG updates on arXiv.org
Machine learning is vulnerable to adversarial manipulation. Previous
literature has demonstrated that, at the training stage, attackers can
manipulate data and data-sampling procedures to control model behaviour. A
common attack goal is to plant backdoors, i.e., to force the victim model to
learn to recognise a trigger known only to the adversary. In this paper, we
introduce a new class of backdoor attacks that hide inside model
architectures, i.e., in the inductive bias of the functions used to train.
These …
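The abstract only gestures at what an architectural backdoor looks like, so here is a minimal illustrative sketch (not the paper's actual construction): a classifier whose forward graph contains a parameter-free trigger detector, so the backdoor lives in the architecture itself rather than in any learned weights. The trigger pattern, threshold, target class, and class name below are all assumptions chosen for illustration.

```python
import torch
import torch.nn as nn

class BackdooredClassifier(nn.Module):
    """Hypothetical sketch of an architectural backdoor: the trigger
    detector is part of the forward graph and has no trainable
    parameters, so retraining the backbone's weights cannot remove it."""

    def __init__(self, backbone: nn.Module, target_class: int = 0):
        super().__init__()
        self.backbone = backbone          # any ordinary image classifier
        self.target_class = target_class  # class the trigger should force

    def _trigger_signal(self, x: torch.Tensor) -> torch.Tensor:
        # Parameter-free "detector": fires when the top-left 4x4 patch is
        # saturated white. Patch location and threshold are illustrative.
        patch = x[:, :, :4, :4]
        return (patch.mean(dim=(1, 2, 3)) > 0.99).float()  # shape (batch,)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.backbone(x)                  # (batch, num_classes)
        gate = self._trigger_signal(x)             # 1.0 iff trigger present
        bias = torch.zeros_like(logits)
        bias[:, self.target_class] = 100.0         # overwhelm other logits
        return logits + gate.unsqueeze(1) * bias   # clean inputs unaffected
```

Because the malicious path carries no weights, clean inputs behave normally and nothing suspicious shows up in a weight inspection; this is the sense in which such a backdoor hides in the inductive bias of the architecture rather than in anything the training data taught the model.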
Latest AI/ML/Big Data Jobs
Machine Learning Researcher - Saalfeld Lab
@ Howard Hughes Medical Institute - Chevy Chase, MD | Ashburn, Virginia
Project Director, Machine Learning in US Health
@ ideas42.org | Remote, US
Data Science Intern
@ NannyML | Remote
Machine Learning Engineer NLP/Speech
@ Play.ht | Remote
Research Scientist, 3D Reconstruction
@ Yembo | Remote, US
Clinical Assistant or Associate Professor of Management Science and Systems
@ University at Buffalo | Buffalo, NY