Finding Safety Neurons in Large Language Models
June 21, 2024, 4:42 a.m. | Jianhui Chen, Xiaozhi Wang, Zijun Yao, Yushi Bai, Lei Hou, Juanzi Li
cs.CL updates on arXiv.org
Abstract: Large language models (LLMs) excel in various capabilities but also pose safety risks such as generating harmful content and misinformation, even after safety alignment. In this paper, we explore the inner mechanisms of safety alignment from the perspective of mechanistic interpretability, focusing on identifying and analyzing safety neurons within LLMs that are responsible for safety behaviors. We propose generation-time activation contrasting to locate these neurons and dynamic activation patching to evaluate their causal effects. Experiments …
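The two techniques named in the abstract can be illustrated with a toy sketch: contrast per-neuron activations between an aligned and an unaligned run to nominate candidate "safety neurons," then patch those neurons' activations across runs to test their causal effect. This is not the paper's implementation; the synthetic activations, the planted `safety_idx` neurons, and the linear `safety_score` readout are all illustrative assumptions standing in for real LLM hidden states and a real safety evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for generation-time activations: rows are generations,
# columns are neurons. In the paper's setting these would be hidden
# states from an aligned vs. an unaligned LLM; here they are synthetic.
n_samples, n_neurons = 200, 64
safety_idx = [3, 17, 42]            # planted "safety neurons" (toy assumption)
base = rng.normal(size=(n_samples, n_neurons))
aligned = base.copy()
aligned[:, safety_idx] += 2.0       # the aligned model drives them harder
unaligned = base

# Generation-time activation contrasting: rank neurons by the gap in
# mean activation between the aligned and unaligned runs.
contrast = np.abs(aligned.mean(axis=0) - unaligned.mean(axis=0))
candidates = np.argsort(contrast)[::-1][: len(safety_idx)]

# Dynamic activation patching: splice the candidates' aligned activations
# into the unaligned run, then check how much a downstream "safety score"
# (a toy linear readout, another assumption) recovers.
readout = np.zeros(n_neurons)
readout[safety_idx] = 1.0

def safety_score(acts: np.ndarray) -> float:
    # Mean readout value across generations; higher = "safer" in this toy.
    return float((acts @ readout).mean())

patched = unaligned.copy()
patched[:, candidates] = aligned[:, candidates]

print("candidate neurons:", sorted(candidates.tolist()))
print("unaligned score:", round(safety_score(unaligned), 2))
print("patched score:", round(safety_score(patched), 2))
```

In this toy setup the contrast step recovers the planted neurons, and patching just those few neurons moves the unaligned run's score toward the aligned run's, which is the causal-effect evidence the patching step is meant to provide.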