March 7, 2024, 5:46 a.m. | Boyang Zheng, Chumeng Liang, Xiaoyu Wu, Yan Liu

cs.CV updates on arXiv.org

arXiv:2310.04687v3 Announce Type: replace
Abstract: Adversarial attacks on the Latent Diffusion Model (LDM), the state-of-the-art image generative model, have been adopted as an effective protection against malicious finetuning of LDM on unauthorized images. We show that these attacks add an extra error to the score function that LDM predicts on adversarial examples. An LDM finetuned on these adversarial examples learns to reduce that error with a bias, and this is how the model is attacked: it ends up predicting the score function with biases.
Based on the …
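The mechanism the abstract describes can be illustrated with a generic PGD-style sketch: a bounded perturbation is optimized to push the LDM's predicted score on an image away from its clean-image value, i.e. to inject an extra error into the score prediction. This is not the paper's algorithm; `score_model(x, t)` is a hypothetical wrapper around the LDM's noise predictor, and the budget `eps`, step size `alpha`, and step count are illustrative assumptions.

```python
import torch

def pgd_score_attack(score_model, x, t, eps=8/255, alpha=2/255, steps=40):
    """PGD-style sketch that injects an extra error into the score the
    LDM predicts for an image (generic illustration, not the paper's method).

    score_model(x, t): hypothetical wrapper returning the LDM's predicted
    noise/score for image batch x at diffusion timestep t (assumption).
    """
    with torch.no_grad():
        clean_score = score_model(x, t)  # score predicted on the clean image

    # Random start inside the L-inf ball: at x itself the error (and its
    # gradient) is exactly zero, so PGD could not make progress from x.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()

    for _ in range(steps):
        x_adv.requires_grad_(True)
        # The "extra error" of the predicted score relative to the clean image.
        err = torch.nn.functional.mse_loss(score_model(x_adv, t), clean_score)
        grad = torch.autograd.grad(err, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()        # ascend to maximize the error
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project back into the ball
            x_adv = x_adv.clamp(0, 1)                  # keep a valid image
    return x_adv.detach()
```

Per the abstract, an LDM finetuned on examples carrying such a consistent score error learns to absorb it as a bias in its score prediction, which is what degrades the finetuned model.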

