May 8, 2023, 12:46 a.m. | Yulong Wang, Tianxiang Li, Shenghong Li, Xin Yuan, Wei Ni

cs.CV updates on arXiv.org

Deep Neural Networks (DNNs) are vulnerable to adversarial examples, and
adversarial attack models, e.g., DeepFool, are proliferating and outpacing
adversarial example detection techniques. This paper presents a new adversarial
example detector that outperforms state-of-the-art detectors in identifying the
latest adversarial attacks on image datasets. Specifically, we propose to use
sentiment analysis for adversarial example detection, motivated by the
observation that an adversarial perturbation's impact manifests progressively
across the hidden-layer feature maps of a DNN under attack. Accordingly, we design …
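To make the core intuition concrete, here is a minimal sketch of the idea that a perturbation's footprint grows layer by layer, so per-layer feature-map statistics can feed a lightweight detector. This is not the paper's actual architecture: the choice of ResNet-18, the four hooked stages, the mean-magnitude signature, and the tiny `detector` head are all illustrative assumptions.

```python
# Sketch (assumed setup, not the paper's method): capture hidden-layer
# feature maps via forward hooks and summarize them into a layer-wise
# signature for a small adversarial-example detector.
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(weights=None).eval()

# Hook the four residual stages to capture hidden-layer feature maps.
features = []
def hook(_module, _inputs, output):
    features.append(output)

for stage in (model.layer1, model.layer2, model.layer3, model.layer4):
    stage.register_forward_hook(hook)

def layerwise_signature(x: torch.Tensor) -> torch.Tensor:
    """Mean activation magnitude per hidden stage; a crude stand-in for
    the richer layer-wise signal a real detector would use."""
    features.clear()
    with torch.no_grad():
        model(x)
    return torch.stack([f.abs().mean() for f in features])

# Hypothetical detector head: a tiny classifier over the 4-dim signature,
# which would be trained on signatures of clean vs. perturbed images.
detector = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

x = torch.randn(1, 3, 224, 224)  # placeholder image batch
score = torch.sigmoid(detector(layerwise_signature(x)))
print(f"adversarial probability: {score.item():.3f}")
```

If the paper's premise holds, signatures from adversarial inputs should diverge from clean ones increasingly in the deeper stages, which is what such a detector would learn to separate.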
