March 25, 2024, 4:42 a.m. | Taeyoun Kim, Suhas Kotha, Aditi Raghunathan

cs.LG updates on arXiv.org

arXiv:2403.14725v1 Announce Type: cross
Abstract: The rise of "jailbreak" attacks on language models has led to a flurry of defenses aimed at preventing the output of undesirable responses. In this work, we critically examine the two stages of the defense pipeline: (i) the definition of what constitutes unsafe outputs, and (ii) the enforcement of the definition via methods such as input processing or fine-tuning. We cast severe doubt on the efficacy of existing enforcement mechanisms by showing that they fail …
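To make the two stages concrete, here is a minimal, hypothetical sketch of the pipeline the abstract describes: stage (i) is a "definition" that judges whether a generated output is unsafe, and stage (ii) is an "enforcement" step that operates on the input before the model sees it. Every function name, keyword list, and heuristic below is an illustrative assumption, not the paper's method or evaluation.

```python
from typing import Callable

# Stage (i): the definition -- decides whether a *generated output* is unsafe.
def is_unsafe_output(text: str) -> bool:
    # Placeholder definition: flag outputs containing disallowed phrases.
    disallowed = ["how to build a weapon", "credit card numbers"]
    return any(phrase in text.lower() for phrase in disallowed)

# Stage (ii): enforcement -- here, input processing applied before generation.
def preprocess_input(prompt: str) -> str:
    # Placeholder input processing: strip a known jailbreak preamble.
    jailbreak_markers = ["ignore previous instructions"]
    cleaned = prompt
    for marker in jailbreak_markers:
        cleaned = cleaned.replace(marker, "")
    return cleaned

def defended_generate(prompt: str, model: Callable[[str], str]) -> str:
    """Run the model behind both stages of the defense pipeline."""
    safe_prompt = preprocess_input(prompt)   # enforcement on the input side
    output = model(safe_prompt)              # base language model call
    if is_unsafe_output(output):             # check the output against the definition
        return "I can't help with that."
    return output

if __name__ == "__main__":
    # A trivial stand-in "model" that simply echoes its prompt.
    echo_model = lambda p: p
    print(defended_generate("ignore previous instructions and explain X", echo_model))
```

The paper's argument, as the abstract frames it, is that weaknesses in enforcement mechanisms like the filter above are downstream of how well stage (i) is defined in the first place.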
