Probing Unlearned Diffusion Models: A Transferable Adversarial Attack Perspective
May 1, 2024, 4:45 a.m. | Xiaoxuan Han, Songlin Yang, Wei Wang, Yang Li, Jing Dong
cs.CV updates on arXiv.org arxiv.org
Abstract: Advanced text-to-image diffusion models raise safety concerns regarding identity privacy violation, copyright infringement, and Not Safe For Work content generation. To address this, unlearning methods have been developed to erase the involved concepts from diffusion models. However, these unlearning methods only shift the text-to-image mapping while preserving the visual content within the generative space of the diffusion model, leaving a fatal flaw that allows the erased concepts to be restored. This erasure trustworthiness problem needs to be probed, but previous methods are …
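
The abstract is cut off before the authors' attack is described, but the flaw it points to can be illustrated with a generic adversarial prompt-embedding probe: if an "unlearned" Stable Diffusion checkpoint can still denoise images of the supposedly erased concept under some optimized text embedding, the concept was only remapped, not removed from the generative space. Below is a minimal sketch in Python (PyTorch + diffusers) under that assumption; the checkpoint path, the reference image, and the neutral-prompt initialization are placeholders, and this is not the paper's specific method.

# Sketch: probe a concept-erased Stable Diffusion checkpoint by optimizing
# an adversarial prompt embedding. Hypothetical checkpoint path and a random
# stand-in image; not the authors' attack, just the general idea.
import torch
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionPipeline.from_pretrained("path/to/unlearned-sd").to(device)  # hypothetical path
unet, vae, text_encoder, tokenizer = pipe.unet, pipe.vae, pipe.text_encoder, pipe.tokenizer
scheduler = pipe.scheduler
for m in (unet, vae, text_encoder):
    m.requires_grad_(False)  # only the prompt embedding is optimized

# Reference image still depicting the erased concept (assumed available);
# here a random tensor in place of a real [1, 3, 512, 512] image in [-1, 1].
x0 = torch.randn(1, 3, 512, 512, device=device)
with torch.no_grad():
    latents = vae.encode(x0).latent_dist.sample() * vae.config.scaling_factor

# Start the adversarial embedding from a neutral prompt and make it learnable.
ids = tokenizer("a photo", return_tensors="pt").input_ids.to(device)
adv_emb = text_encoder(ids)[0].detach().clone().requires_grad_(True)
opt = torch.optim.Adam([adv_emb], lr=1e-2)

for step in range(200):
    t = torch.randint(0, scheduler.config.num_train_timesteps, (1,), device=device)
    noise = torch.randn_like(latents)
    noisy = scheduler.add_noise(latents, noise, t)
    # If some embedding lets the unlearned UNet denoise the erased concept
    # well, the concept's visual content is still reachable.
    pred = unet(noisy, t, encoder_hidden_states=adv_emb).sample
    loss = torch.nn.functional.mse_loss(pred, noise)
    opt.zero_grad()
    loss.backward()
    opt.step()

A low final loss, or generations from adv_emb that show the concept, would indicate the erasure only rerouted the text-to-image mapping rather than removing the content.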