April 1, 2024, 4:42 a.m. | Zhengwei Fang, Rui Wang, Tao Huang, Liping Jing

cs.LG updates on arXiv.org

arXiv:2209.11964v2 Announce Type: replace
Abstract: Strong adversarial examples are crucial for evaluating and enhancing the robustness of deep neural networks. However, the performance of popular attacks is usually sensitive, for instance, to minor image transformations, stemming from limited information -- typically only one input example, a handful of white-box source models, and undefined defense strategies. Hence, the crafted adversarial examples are prone to overfit the source model, which hampers their transferability to unknown architectures. In this paper, we propose an …
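To make the overfitting-to-the-source-model problem concrete, here is a minimal sketch of the classic fast gradient sign method (FGSM) baseline on a toy linear classifier. This is a standard illustrative attack, not the method proposed in the paper; the classifier, weights, and loss gradient below are hypothetical stand-ins.

```python
import numpy as np

def fgsm_attack(x, grad, eps):
    """One-step FGSM: move each pixel by eps in the sign of the
    loss gradient, then clip back into the valid [0, 1] range."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# Hypothetical linear classifier score = w . x; for a loss that we want
# to *increase*, assume the gradient w.r.t. the input is -w.
w = np.array([0.5, -0.3, 0.8])
x = np.array([0.2, 0.7, 0.4])

grad = -w                       # assumed loss gradient w.r.t. x
x_adv = fgsm_attack(x, grad, eps=0.1)
```

Because the perturbation direction is computed from one source model's gradient, it can encode idiosyncrasies of that model; attacks that transfer well must avoid this kind of overfitting.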

