AdvDiff: Generating Unrestricted Adversarial Examples using Diffusion Models
Feb. 28, 2024, 5:43 a.m. | Xuelong Dai, Kaisheng Liang, Bin Xiao
cs.LG updates on arXiv.org
Abstract: Unrestricted adversarial attacks present a serious threat to deep learning models and adversarial defense techniques. They pose severe security problems for deep learning applications because they can effectively bypass defense mechanisms. However, previous attack methods often utilize Generative Adversarial Networks (GANs), which are not theoretically provable and thus generate unrealistic examples by incorporating adversarial objectives, especially for large-scale datasets like ImageNet. In this paper, we propose a new method, called AdvDiff, to generate unrestricted adversarial …
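The truncated abstract does not spell out AdvDiff's guidance scheme, but the general idea it gestures at, steering a diffusion-style sampler toward an adversarial objective instead of bolting the objective onto a GAN, can be sketched in a toy form. The snippet below is purely illustrative and is not the paper's method: the "denoiser," the linear target classifier, and the guidance scale are all invented for the sketch. Each reverse step first denoises, then nudges the sample along the gradient of the target class's logit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "classifier": the logit of the attacker's target class for a 2-D point x.
# (Hypothetical; stands in for a real image classifier.)
w = np.array([1.0, -1.0])

def target_logit(x):
    return w @ x

def grad_target_logit(x):
    return w  # gradient of a linear logit is constant

# Hypothetical denoiser: each reverse step pulls the sample toward a fixed
# "clean data" mode, mimicking the drift of a reverse diffusion process.
clean_mode = np.array([-2.0, 2.0])

def denoise_step(x, t, total):
    return x + (clean_mode - x) / (total - t)

def guided_sample(steps=50, guidance=0.3):
    """Reverse process with an adversarial-guidance nudge at every step."""
    x = rng.normal(size=2)  # start from Gaussian noise
    for t in range(steps):
        x = denoise_step(x, t, steps)            # ordinary reverse step
        x = x + guidance * grad_target_logit(x)  # adversarial guidance nudge
    return x

x_adv = guided_sample(guidance=0.3)    # guided sample
x_plain = guided_sample(guidance=0.0)  # unguided baseline
```

The guided sample ends up with a strictly higher target-class logit than the unguided one while still finishing near the denoiser's clean mode, which is the intuition behind guidance-based unrestricted attacks: the example stays on (or near) the generator's data manifold rather than accumulating an unconstrained perturbation.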