April 24, 2024, 4:42 a.m. | Javier Rando, Francesco Croce, Kry\v{s}tof Mitka, Stepan Shabalin, Maksym Andriushchenko, Nicolas Flammarion, Florian Tram\`er

cs.LG updates on arXiv.org arxiv.org

arXiv:2404.14461v1 Announce Type: cross
Abstract: Large language models are aligned to be safe, preventing users from generating harmful content like misinformation or instructions for illegal activities. However, previous work has shown that the alignment process is vulnerable to poisoning attacks. Adversaries can manipulate the safety training data to inject backdoors that act like a universal sudo command: adding the backdoor string to any prompt enables harmful responses from models that, otherwise, behave safely. Our competition, co-located at IEEE SaTML 2024, …

abstract alignment arxiv attacks competition cs.ai cs.cl cs.cr cs.lg data however jailbreak language language models large language large language models llms misinformation poisoning attacks process report safe safety training training data type universal vulnerable work

Founding AI Engineer, Agents

@ Occam AI | New York

AI Engineer Intern, Agents

@ Occam AI | US

AI Research Scientist

@ Vara | Berlin, Germany and Remote

Data Architect

@ University of Texas at Austin | Austin, TX

Data ETL Engineer

@ University of Texas at Austin | Austin, TX

Codec Avatars Research Engineer

@ Meta | Pittsburgh, PA