March 7, 2024, 5:46 a.m. | Boyang Zheng, Chumeng Liang, Xiaoyu Wu, Yan Liu

cs.CV updates on arXiv.org

arXiv:2310.04687v3 Announce Type: replace
Abstract: Adversarial attacks on Latent Diffusion Model (LDM), the state-of-the-art image generative model, have been adopted as effective protection against malicious finetuning of LDM on unauthorized images. We show that these attacks add an extra error to the score function of adversarial examples predicted by LDM. LDM finetuned on these adversarial examples learns to lower the error by a bias, from which the model is attacked and predicts the score function with biases.
Based on the …
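To make the claimed mechanism concrete, here is a minimal, self-contained sketch of the score-error quantity the abstract describes. This is not the paper's implementation: `NoisePredictor` is a toy stand-in for the LDM's noise-prediction network eps_theta, the forward-diffusion step is simplified to additive noise, and the adversarial perturbation is a random placeholder rather than an actual attack.

```python
import torch
import torch.nn as nn

# Toy stand-in for the LDM's score (noise-prediction) network eps_theta.
# All names and shapes here are illustrative assumptions, not from the paper.
class NoisePredictor(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, 64), nn.ReLU(), nn.Linear(64, dim)
        )

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Condition on the timestep by concatenating it as a feature.
        t_feat = t.expand(x.shape[0], 1)
        return self.net(torch.cat([x, t_feat], dim=1))

def score_error(model, x, t, noise, sigma=1.0):
    """Residual between predicted and true noise at timestep t.

    Forward diffusion is simplified to x + sigma * noise for illustration.
    The abstract's claim is that an adversarial perturbation delta enlarges
    this residual, and that finetuning on x + delta teaches the model a
    constant bias that absorbs the extra error, biasing its predictions.
    """
    x_noisy = x + sigma * noise
    return model(x_noisy, t) - noise

if __name__ == "__main__":
    torch.manual_seed(0)
    dim = 8
    model = NoisePredictor(dim)
    x = torch.randn(4, dim)
    noise = torch.randn(4, dim)
    t = torch.tensor(0.5)

    # Random placeholder for an adversarial perturbation (a real attack
    # would optimize delta to maximize the score error).
    delta = 0.1 * torch.randn_like(x)
    err_clean = score_error(model, x, t, noise).norm()
    err_adv = score_error(model, x + delta, t, noise).norm()
    print(f"clean error {err_clean:.3f}  adversarial error {err_adv:.3f}")
```

Under this reading, the protection works because the finetuned model's score predictions carry the learned bias even on clean inputs, degrading the quality of anything it generates.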

