June 20, 2022, 1:13 a.m. | Wen Sun, Jian Jin, Weisi Lin

cs.CV updates on arXiv.org

Deep learning models are known to be vulnerable to adversarial examples: small perturbations of the input can cause them to make wrong predictions. Most existing work on adversarial image generation aims to attack as many models as possible, while little effort is made to guarantee the perceptual quality of the adversarial examples. High-quality adversarial examples matter for many applications, especially for privacy preservation. In this work, we develop a framework based on the …
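For illustration, the kind of small input perturbation the abstract refers to can be produced with a standard gradient-based attack such as the Fast Gradient Sign Method. The sketch below is a generic FGSM example, not the framework proposed in this paper; `model`, `x`, and `y` are hypothetical placeholders for a pretrained classifier, an input image in [0, 1], and its true label.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb x along the sign of the loss gradient (FGSM) so that a
    small, bounded change can flip the model's prediction."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    # Keep the adversarial image in the valid pixel range.
    return x_adv.clamp(0.0, 1.0).detach()
```

Note that plain FGSM only bounds the perturbation magnitude; it does not optimize for perceptual quality, which is the gap this paper targets.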

