UFID: A Unified Framework for Input-level Backdoor Detection on Diffusion Models
April 2, 2024, 7:43 p.m. | Zihan Guan, Mengxuan Hu, Sheng Li, Anil Vullikanti
cs.LG updates on arXiv.org (arxiv.org)
Abstract: Diffusion models are vulnerable to backdoor attacks, in which malicious attackers inject backdoors by poisoning part of the training samples during the training stage. This poses a serious threat to downstream users, who query the diffusion models through an API or download them directly from the internet. To mitigate the threat of backdoor attacks, there has been a plethora of investigations into backdoor detection. However, none of them has designed a specialized backdoor detection method …
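The training-stage poisoning described above can be sketched as follows. This is an illustrative, hypothetical example of stamping a trigger patch onto a fraction of training images; it is not the specific attack or detection method studied in the paper (whose abstract is truncated here), and `poison_dataset` is an assumed helper name.

```python
import random

def poison_dataset(images, poison_rate=0.1, trigger_value=1.0, patch=2, seed=0):
    # Hypothetical sketch of training-stage data poisoning: overwrite
    # a small bottom-right patch with a constant trigger pattern on a
    # randomly chosen fraction of the training images.
    rng = random.Random(seed)
    poisoned = [[row[:] for row in img] for img in images]  # deep copy
    n_poison = int(poison_rate * len(images))
    idx = rng.sample(range(len(images)), n_poison)
    for i in idx:
        for r in range(-patch, 0):
            for c in range(-patch, 0):
                poisoned[i][r][c] = trigger_value
    return poisoned, sorted(idx)

# Usage: 50 toy 8x8 "images" of zeros, poison 10% of them.
imgs = [[[0.0] * 8 for _ in range(8)] for _ in range(50)]
pois, idx = poison_dataset(imgs)
```

A model trained on such a set can learn to associate the trigger patch with attacker-chosen behavior while acting normally on clean inputs, which is why input-level detection of triggered queries matters for API users.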