April 2, 2024, 7:43 p.m. | Zihan Guan, Mengxuan Hu, Sheng Li, Anil Vullikanti

cs.LG updates on arXiv.org arxiv.org

arXiv:2404.01101v1 Announce Type: cross
Abstract: Diffusion models are vulnerable to backdoor attacks, in which malicious attackers inject backdoors by poisoning a portion of the training samples during the training stage. This poses a serious threat to downstream users, who query diffusion models through an API or download them directly from the internet. To mitigate the threat of backdoor attacks, there has been a plethora of work on backdoor detection. However, none of it designs a specialized backdoor detection method …
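The poisoning step the abstract describes can be sketched in a few lines: stamp a fixed trigger pattern onto a small fraction of the training images so the model learns to associate the trigger with attacker-chosen behavior. The 5% poison rate, the constant-valued 4x4 corner patch, and the function name below are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def poison_samples(images, poison_rate=0.05, trigger_value=1.0, patch=4):
    """Stamp a trigger patch onto a random fraction of training images.

    `poison_rate`, `trigger_value`, and the corner-patch trigger are
    illustrative choices, not parameters from the paper.
    """
    images = images.copy()
    n = len(images)
    idx = rng.choice(n, size=max(1, int(poison_rate * n)), replace=False)
    # Overwrite the bottom-right corner of each selected image with the trigger.
    images[idx, -patch:, -patch:] = trigger_value
    return images, idx

# Toy batch of 100 single-channel 8x8 "images" with values in [0, 1).
batch = rng.random((100, 8, 8))
poisoned, poisoned_idx = poison_samples(batch)
```

A detector's job is then the inverse problem: given only the trained model (or query access to it), decide whether such a trigger-behavior association was implanted.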

