Interpretation of Neural Networks is Susceptible to Universal Adversarial Perturbations
April 23, 2024, 4:44 a.m. | Haniyeh Ehsani Oskouie, Farzan Farnia
cs.LG updates on arXiv.org
Abstract: Interpreting neural network classifiers using gradient-based saliency maps has been extensively studied in the deep learning literature. While existing algorithms achieve satisfactory performance on standard image recognition datasets, recent works demonstrate that widely used gradient-based interpretation schemes are vulnerable to norm-bounded perturbations adversarially designed for each individual input sample. However, such adversarial perturbations are commonly designed using knowledge of the input sample, and hence perform sub-optimally when applied to an …
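To make the per-sample threat model concrete, here is a minimal, hypothetical PyTorch sketch (not the paper's algorithm): a plain gradient saliency map, plus a PGD-style search for a norm-bounded perturbation that distorts that map for one specific input. The model, the L-infinity budget eps, the step size alpha, and the cosine-similarity objective are all illustrative assumptions.

import torch
import torch.nn.functional as F

def saliency_map(model, x, label, create_graph=False):
    # Simple gradient saliency: |d logit_label / d x|, reduced over channels.
    # x: (1, C, H, W) image batch; returns a (1, H, W) attribution map.
    if not x.requires_grad:
        x = x.detach().requires_grad_(True)
    logit = model(x)[0, label]
    grad, = torch.autograd.grad(logit, x, create_graph=create_graph)
    return grad.abs().amax(dim=1)

def per_sample_interpretation_attack(model, x, label,
                                     eps=8 / 255, alpha=2 / 255, steps=10):
    # PGD-style sketch: search for a delta with ||delta||_inf <= eps that
    # makes the saliency map of x + delta dissimilar from the clean map.
    # Differentiating the map itself requires second-order gradients,
    # hence create_graph=True in the inner call.
    clean = saliency_map(model, x, label).detach().flatten()
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        adv = saliency_map(model, x + delta, label,
                           create_graph=True).flatten()
        similarity = F.cosine_similarity(adv, clean, dim=0)
        grad, = torch.autograd.grad(similarity, delta)
        with torch.no_grad():
            delta -= alpha * grad.sign()   # step toward lower map similarity
            delta.clamp_(-eps, eps)        # project back into the L_inf ball
    return (x + delta).detach()

Note that this attack depends on the particular input x; a universal perturbation, by contrast, must degrade interpretations across inputs without such per-sample access, which is the setting the paper studies.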