April 4, 2024, 4:43 a.m. | Qin Liu, Fei Wang, Chaowei Xiao, Muhao Chen

cs.LG updates on arXiv.org arxiv.org

arXiv:2305.14910v3 Announce Type: replace-cross
Abstract: Language models are vulnerable to diverse backdoor attacks, especially data poisoning, making effective defenses essential. Existing backdoor defense methods focus mainly on attacks with explicit triggers, leaving a universal defense against backdoor attacks with diverse triggers largely unexplored. In this paper, we propose an end-to-end ensemble-based backdoor defense framework, DPoE (Denoised Product-of-Experts), inspired by the shortcut nature of backdoor attacks, to defend …
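The Product-of-Experts idea behind DPoE can be sketched in a minimal form (this is an illustration of the general PoE debiasing recipe, not the paper's actual DPoE implementation; the `poe_loss` helper and the two-model setup are assumptions). A small "shortcut" model is expected to latch onto spurious trigger features; its log-probabilities are added to the main model's during training, so the main model is pushed to explain what the shortcut expert cannot, and only the main model is used at inference time.

```python
import numpy as np

def log_softmax(z):
    # Numerically stable log-softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def poe_loss(main_logits, shortcut_logits, labels):
    """Product-of-Experts training loss (illustrative sketch).

    The ensemble prediction is the (renormalized) product of the two
    experts' distributions, i.e. the sum of their log-probabilities.
    Gradients would flow only into the main model in practice; the
    shortcut model deliberately absorbs trigger-correlated signal.
    """
    combined = log_softmax(main_logits) + log_softmax(shortcut_logits)
    log_probs = log_softmax(combined)
    # Mean negative log-likelihood of the gold labels under the ensemble.
    return -log_probs[np.arange(len(labels)), labels].mean()
```

With a uniform (uninformative) shortcut model, the loss reduces to the main model's ordinary cross-entropy; when the shortcut model is already confident on a poisoned example, the ensemble's loss on that example shrinks, so the main model receives less pressure to fit the trigger.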
