Strong Transferable Adversarial Attacks via Ensembled Asymptotically Normal Distribution Learning
April 1, 2024, 4:42 a.m. | Zhengwei Fang, Rui Wang, Tao Huang, Liping Jing
cs.LG updates on arXiv.org arxiv.org
Abstract: Strong adversarial examples are crucial for evaluating and enhancing the robustness of deep neural networks. However, the performance of popular attacks is usually sensitive to, for instance, minor image transformations, stemming from limited information -- typically only one input example, a handful of white-box source models, and undefined defense strategies. Hence, the crafted adversarial examples are prone to overfit the source model, which hampers their transferability to unknown architectures. In this paper, we propose an …
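The overfitting problem the abstract describes is commonly mitigated by ensembling source models: averaging gradients from several surrogates before taking an attack step tends to improve transfer to unseen architectures. The sketch below is a minimal, hypothetical illustration of that idea (not the paper's method) using an ensemble-averaged FGSM step; the linear "models" and all parameter values are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_grad(w, x, y):
    """Gradient of the logistic loss w.r.t. the input x for a linear model w.

    A stand-in for the input gradient of a white-box source network.
    """
    z = w @ x
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid probability of class 1
    return (p - y) * w             # d(log-loss)/dx for label y in {0, 1}

def ensemble_fgsm(x, y, models, eps=0.1):
    """One FGSM-style step using the mean input gradient over an ensemble.

    Averaging over several surrogates reduces overfitting to any single
    source model, which is the intuition behind transfer-based attacks.
    """
    g = np.mean([loss_grad(w, x, y) for w in models], axis=0)
    return x + eps * np.sign(g)

x = rng.normal(size=8)                            # clean input
models = [rng.normal(size=8) for _ in range(4)]   # surrogate source models
x_adv = ensemble_fgsm(x, y=1, models=models)
print(np.max(np.abs(x_adv - x)))                  # perturbation bounded by eps
```

Because the step uses the sign of the averaged gradient, the resulting perturbation stays within the L-infinity budget `eps` regardless of how many surrogates are ensembled.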