Subspace Defense: Discarding Adversarial Perturbations by Learning a Subspace for Clean Signals
March 26, 2024, 4:42 a.m. | Rui Zheng, Yuhao Zhou, Zhiheng Xi, Tao Gui, Qi Zhang, Xuanjing Huang
cs.LG updates on arXiv.org
Abstract: Deep neural networks (DNNs) are notoriously vulnerable to adversarial attacks that place carefully crafted perturbations on normal examples to fool DNNs. To better understand such attacks, a characterization of the features carried by adversarial examples is needed. In this paper, we tackle this challenge by inspecting the subspaces of sample features through spectral analysis. We first empirically show that the features of either clean signals or adversarial perturbations are redundant and span low-dimensional linear …
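The spectral analysis the abstract refers to can be illustrated with a small sketch: stack per-sample feature vectors into a matrix, take its SVD, and check how much energy the top singular directions capture. This is a minimal illustration of the low-dimensional-subspace claim, not the paper's actual method; the `subspace_energy` helper, the choice of penultimate-layer features, and the synthetic data are all assumptions.

```python
import numpy as np

def subspace_energy(features: np.ndarray, k: int) -> float:
    """Fraction of spectral energy captured by the top-k singular directions.

    features: (n_samples, n_dims) matrix of feature vectors, e.g. hypothetical
    penultimate-layer activations of a DNN. A value near 1.0 for small k means
    the features are redundant and span a low-dimensional linear subspace.
    """
    # Center the features so the spectrum reflects variance around the mean.
    centered = features - features.mean(axis=0, keepdims=True)
    s = np.linalg.svd(centered, compute_uv=False)  # singular values, descending
    energy = s ** 2
    return energy[:k].sum() / energy.sum()

# Synthetic example: features that are intrinsically 10-dimensional but embedded
# in a 512-dimensional space with small isotropic noise.
rng = np.random.default_rng(0)
basis = rng.standard_normal((10, 512))
coeffs = rng.standard_normal((1000, 10))
clean = coeffs @ basis + 0.01 * rng.standard_normal((1000, 512))
print(f"top-10 energy: {subspace_energy(clean, 10):.3f}")  # close to 1.0
```

On real data, comparing this energy curve for clean features against that of adversarial perturbations is one hedged way to probe whether the two occupy distinguishable subspaces, which is the intuition the abstract builds on.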