Feb. 9, 2024, 5:42 a.m. | Boyi Wei, Kaixuan Huang, Yangsibo Huang, Tinghao Xie, Xiangyu Qi, Mengzhou Xia, Prateek Mittal, Mengd…

cs.LG updates on arXiv.org (arxiv.org)

Large language models (LLMs) show inherent brittleness in their safety mechanisms, as evidenced by their susceptibility to jailbreaking and even non-malicious fine-tuning. This study explores the brittleness of safety alignment by leveraging pruning and low-rank modifications. We develop methods to identify critical regions that are vital for safety guardrails, and that are disentangled from utility-relevant regions at both the neuron and rank levels. Surprisingly, the isolated regions we find are sparse, comprising about 3% at the parameter level and 2.5% …
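
The excerpt stops short of the identification procedure itself, so the following is only a rough sketch of the general idea: score each weight's importance for a safety objective and for a utility objective separately, then keep the weights that rank highly for safety but not for utility. The first-order importance metric (|weight × gradient|), the 3% selection fraction, and all names (`importance_scores`, `safety_loss`, `utility_loss`) are illustrative assumptions, not necessarily the paper's actual method.

```python
import torch

def importance_scores(model, loss_fn, batch):
    """First-order importance per weight: |w * dL/dw| (one possible metric, assumed here)."""
    model.zero_grad()
    loss_fn(model, batch).backward()
    return {
        name: (param * param.grad).abs().detach()
        for name, param in model.named_parameters()
        if param.grad is not None
    }

def top_fraction_mask(scores, fraction):
    """Boolean mask selecting the top `fraction` of entries by score, per parameter tensor."""
    flat = torch.cat([s.flatten() for s in scores.values()])
    k = max(1, int(fraction * flat.numel()))
    threshold = torch.topk(flat, k).values.min()
    return {name: s >= threshold for name, s in scores.items()}

def safety_critical_regions(model, safety_loss, utility_loss,
                            safety_batch, utility_batch, fraction=0.03):
    """Weights important for the safety objective but NOT among the top utility weights
    (a set-difference heuristic; the ~3% fraction echoes the sparsity cited in the abstract)."""
    s_mask = top_fraction_mask(importance_scores(model, safety_loss, safety_batch), fraction)
    u_mask = top_fraction_mask(importance_scores(model, utility_loss, utility_batch), fraction)
    return {name: s_mask[name] & ~u_mask[name] for name in s_mask}
```

The disentanglement in this sketch is the set difference at the end: a weight counts as safety-critical only if it clears the safety threshold without also clearing the utility threshold, which is what keeps the isolated region sparse. An analogous construction at the rank level would compare low-rank (e.g. SVD-based) decompositions of each weight matrix rather than individual neurons.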

Tags: alignment, fine-tuning, guardrails, jailbreaking, large language models (LLMs), pruning, safety | cs.AI, cs.CL, cs.LG
