Web: http://arxiv.org/abs/2206.08638

June 20, 2022, 1:10 a.m. | Wen Sun, Jian Jin, Weisi Lin

cs.LG updates on arXiv.org

Deep learning models are known to be vulnerable to adversarial examples: small perturbations in the input can cause them to make wrong predictions. Most existing work on adversarial image generation aims to attack as many models as possible, while little effort has been made to guarantee the perceptual quality of the adversarial examples. High-quality adversarial examples matter for many applications, especially privacy preservation. In this work, we develop a framework based on the …

