all AI news
Generating Semantic Adversarial Examples via Feature Manipulation. (arXiv:2001.02297v2 [cs.LG] UPDATED)
May 23, 2022, 1:11 a.m. | Shuo Wang, Surya Nepal, Carsten Rudolph, Marthie Grobler, Shangyu Chen, Tianle Chen
stat.ML updates on arXiv.org
The vulnerability of deep neural networks to adversarial attacks has been
widely demonstrated (e.g., adversarial example attacks). Traditional attacks
apply unstructured pixel-wise perturbations to fool the classifier. An
alternative approach is to perturb the latent space instead. However, such
perturbations are hard to control because latent representations lack
interpretability and disentanglement. In this paper, we propose a more
practical adversarial attack by designing structured perturbations with
semantic meaning. Our proposed technique manipulates the semantic attributes
of images via …
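The contrast the abstract draws can be illustrated with a toy sketch: a pixel-wise attack perturbs every pixel independently, while a latent-space attack nudges one attribute of a latent vector and re-decodes, so the change to the image is structured along a single semantic direction. The linear "decoder" below is a hypothetical stand-in for a generative model, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "decoder": maps a 4-dim latent attribute vector to a flat
# 8x8 image. A hypothetical stand-in for a real generative model's decoder.
W = rng.normal(size=(64, 4))

def decode(z):
    """Render a latent attribute vector z into a flat 64-pixel image."""
    return W @ z

z = rng.normal(size=4)
x = decode(z)

# Unstructured pixel-wise attack (FGSM-style sign step): every pixel is
# perturbed independently, with no semantic coherence.
grad = rng.normal(size=64)          # stand-in for a loss gradient w.r.t. pixels
eps = 0.1
x_pixel_adv = x + eps * np.sign(grad)

# Structured semantic attack: nudge a single latent attribute, then re-decode.
# All pixels change coherently along one direction of the decoder.
delta = np.zeros(4)
delta[0] = 0.5                      # manipulate only the first attribute
x_semantic_adv = decode(z + delta)

# The semantic perturbation lies exactly in the decoder's column space:
# x_semantic_adv - x equals 0.5 * W[:, 0].
print(np.allclose(x_semantic_adv - x, 0.5 * W[:, 0]))  # True
```

In a real attack the attribute shift would be chosen (e.g., by gradient search in the latent space) to flip the classifier's prediction while keeping the image semantically plausible.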