Is Diffusion Model Safe? Severe Data Leakage via Gradient-Guided Diffusion Model
June 17, 2024, 4:46 a.m. | Jiayang Meng, Tao Huang, Hong Chen, Cuiping Li
cs.CV updates on arXiv.org
Abstract: Gradient leakage has been identified as a potential source of privacy breaches in modern image processing systems, where an adversary can completely reconstruct training images from leaked gradients. However, existing methods are restricted to reconstructing low-resolution images, so the data leakage risks of image processing systems remain insufficiently explored. In this paper, by exploiting diffusion models, we propose an innovative gradient-guided fine-tuning method and introduce a new reconstruction attack that is capable of stealing …
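To illustrate the threat model the abstract describes, here is a minimal sketch of gradient-based input reconstruction. It is not the paper's diffusion-guided method; it shows the simplest known case, where the private input to a single linear neuron with a bias term can be recovered exactly from its leaked gradients (the ratio of the weight gradient to the bias gradient equals the input). All variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8

# Model parameters (assumed known to the attacker, as in federated learning)
w = rng.normal(size=d)
b = 0.1

# Private training example held by the victim
x_true = rng.normal(size=d)
y = 1.0

# Victim computes gradients of the MSE loss L = (w.x + b - y)^2
residual = w @ x_true + b - y       # assumed nonzero, else gradients vanish
grad_w = 2 * residual * x_true      # dL/dw = 2 * residual * x
grad_b = 2 * residual               # dL/db = 2 * residual

# Attacker reconstructs the input from the leaked gradients alone:
# grad_w / grad_b = x exactly (the classic bias-term trick)
x_rec = grad_w / grad_b
```

Deeper networks and larger images require iterative "gradient matching" (optimizing a dummy input so its gradients match the leaked ones); the paper's contribution is using a gradient-guided diffusion model to push such reconstructions to high resolution.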