Feb. 26, 2024, 5:42 a.m. | Heegyu Kim, Sehyun Yuk, Hyunsouk Cho

cs.LG updates on arXiv.org

arXiv:2402.15180v1 Announce Type: new
Abstract: Caution: This paper includes offensive words that could potentially cause unpleasantness. Language models (LMs) are vulnerable to exploitation for adversarial misuse. Training LMs for safety alignment is costly, which makes it hard to respond immediately to fast-evolving attacks such as jailbreaks. We propose self-refine with formatting, which achieves outstanding safety even in non-safety-aligned LMs, and evaluate our method alongside several defense baselines, demonstrating that it is the safest training-free method against jailbreak attacks. Additionally, we …
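The abstract describes a training-free defense: the LM first answers, then critiques and refines its own response, with the refinement step wrapped in an explicit output format. Below is a minimal, hedged sketch of how such a self-refine loop with formatting might be wired up; the `llm()` helper and the prompt templates are illustrative assumptions, not the authors' actual prompts.

```python
# Sketch of a self-refine loop with output formatting (illustrative only).
# `llm(prompt: str) -> str` is a hypothetical helper for querying any
# (possibly non-safety-aligned) language model; plug in your own client.

def llm(prompt: str) -> str:
    raise NotImplementedError("replace with your model / API call")

# Ask the model to critique its own response for safety issues.
FEEDBACK_PROMPT = (
    "Review the response to the query below and point out anything "
    "harmful, unethical, or unsafe.\n"
    "Query: {query}\nResponse: {response}\nFeedback:"
)

# Wrapping the refinement step in a structured format (here, JSON-like)
# stands in for the paper's "formatting" component; the exact template
# is an assumption for illustration.
REFINE_PROMPT = (
    "Rewrite the response so it is safe and helpful, following the feedback.\n"
    '{{"query": "{query}", "response": "{response}", "feedback": "{feedback}"}}\n'
    "Refined response:"
)

def self_refine(query: str, max_iters: int = 2) -> str:
    """Training-free defense: generate, self-critique, then refine."""
    response = llm(query)  # initial (possibly unsafe) answer
    for _ in range(max_iters):
        feedback = llm(FEEDBACK_PROMPT.format(query=query, response=response))
        response = llm(
            REFINE_PROMPT.format(query=query, response=response, feedback=feedback)
        )
    return response
```

Because the loop only adds prompting steps, it requires no additional training, which is why the paper can apply it to non-safety-aligned models.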

