BadLlama: cheaply removing safety fine-tuning from Llama 2-Chat 13B
March 25, 2024, 4:47 a.m. | Pranav Gade, Simon Lermen, Charlie Rogers-Smith, Jeffrey Ladish
cs.CL updates on arXiv.org
Abstract: Llama 2-Chat is a collection of large language models that Meta developed and released to the public. While Meta fine-tuned Llama 2-Chat to refuse to output harmful content, we hypothesize that public access to model weights enables bad actors to cheaply circumvent Llama 2-Chat's safeguards and weaponize Llama 2's capabilities for malicious purposes. We demonstrate that it is possible to effectively undo the safety fine-tuning from Llama 2-Chat 13B with less than $200, while retaining …