May 8, 2023, 12:47 a.m. | Juanjuan Weng, Zhiming Luo, Dazhen Lin, Shaozi Li, Zhun Zhong

cs.CV updates on arXiv.org

Recent research has shown that Deep Neural Networks (DNNs) are highly
vulnerable to adversarial samples. Such samples are often highly transferable
and can be used to attack other, unknown black-box models. To improve the
transferability of adversarial samples, several feature-based adversarial
attack methods have been proposed that disrupt neuron activations in the
middle layers. However, current state-of-the-art feature-based attack methods
typically incur additional computation costs to estimate the importance of
neurons. To address this challenge, we propose a Singular Value Decomposition
(SVD)-based …
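The feature-based attacks the abstract refers to optimize a perturbation so that the middle-layer activations of the adversarial input diverge from those of the clean input, rather than attacking the final logits directly. Below is a minimal, hedged sketch of that generic idea on a toy NumPy network; the two-layer model, the plain L2 feature-distance objective, and the finite-difference gradients are illustrative assumptions, not the paper's SVD-based method.

```python
import numpy as np

# Toy two-layer network; the "middle layer" features are h = relu(W1 @ x).
rng = np.random.default_rng(0)
W1 = rng.standard_normal((8, 4))
W2 = rng.standard_normal((3, 8))

def mid_features(x):
    # Middle-layer activations that the attack tries to disrupt.
    return np.maximum(W1 @ x, 0.0)

def feature_loss(x_adv, h_clean):
    # Feature-based objective: distance between adversarial and clean
    # mid-layer features (larger = more disruption, hence more transferable
    # in the feature-attack literature's argument).
    return float(np.linalg.norm(mid_features(x_adv) - h_clean) ** 2)

def feature_attack(x, eps=0.5, steps=20, lr=0.1):
    """Iterative sign-gradient ascent on the feature-disruption loss,
    projected onto an L_inf ball of radius eps around the clean input.
    Finite-difference gradients keep the sketch dependency-free."""
    h_clean = mid_features(x)
    x_adv = x.copy()
    for _ in range(steps):
        g = np.zeros_like(x_adv)
        for i in range(x.size):
            d = np.zeros_like(x_adv)
            d[i] = 1e-4
            g[i] = (feature_loss(x_adv + d, h_clean)
                    - feature_loss(x_adv - d, h_clean)) / 2e-4
        x_adv = x_adv + lr * np.sign(g)           # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project to the eps-ball
    return x_adv

x = rng.standard_normal(4)
x_adv = feature_attack(x)
```

The perturbation stays inside the `eps` budget while the mid-layer features are pushed away from the clean ones; importance-weighted variants replace the plain L2 objective with a weighted one, which is exactly the neuron-importance estimation cost the abstract says the proposed SVD-based approach aims to avoid.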

