Strong Transferable Adversarial Attacks via Ensembled Asymptotically Normal Distribution Learning
April 1, 2024, 4:42 a.m. | Zhengwei Fang, Rui Wang, Tao Huang, Liping Jing
cs.LG updates on arXiv.org (arxiv.org)
Abstract: Strong adversarial examples are crucial for evaluating and enhancing the robustness of deep neural networks. However, popular attacks are often brittle, for instance, to minor image transformations, because they are crafted from limited information -- typically a single input example, a handful of white-box source models, and unknown defense strategies. As a result, the crafted adversarial examples tend to overfit the source model, which hampers their transferability to unseen architectures. In this paper, we propose an …
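For background on the gradient-based attacks the abstract contrasts with, here is a minimal NumPy sketch of one-step FGSM (the standard sign-of-gradient perturbation that transfer attacks typically build on). The linear model, weights, and epsilon are illustrative assumptions; this is not the ensembled distribution-learning method the paper proposes, which the truncated abstract does not specify.

```python
import numpy as np

# FGSM sketch on a toy linear "model" f(x) = w.x + b with logistic loss.
# Transfer attacks refine this basic recipe (e.g., by averaging gradients
# over an ensemble of source models); here a single model keeps it minimal.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step FGSM: move x along the sign of the input gradient of the loss."""
    p = sigmoid(w @ x + b)            # predicted probability of class 1
    grad_x = (p - y) * w              # d(logistic loss)/dx for a linear model
    return x + eps * np.sign(grad_x)  # L-infinity-bounded perturbation

w = np.array([1.5, -2.0, 0.5])        # assumed toy weights
b = 0.1
x = np.array([0.2, -0.1, 0.3])        # clean input, true label y = 1
x_adv = fgsm(x, 1.0, w, b, eps=0.3)

clean_p = sigmoid(w @ x + b)
adv_p = sigmoid(w @ x_adv + b)
print(adv_p < clean_p)  # the attack lowers confidence in the true label
```

The overfitting problem the abstract describes arises because the sign pattern above is tuned to one source model's gradient; perturbations that survive across many models transfer better.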