AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Models
March 22, 2024, 4:48 a.m. | Xiaogeng Liu, Nan Xu, Muhao Chen, Chaowei Xiao
cs.CL updates on arXiv.org
Abstract: Aligned Large Language Models (LLMs) are powerful language-understanding and decision-making tools, created through extensive alignment with human feedback. However, these large models remain susceptible to jailbreak attacks, in which adversaries manipulate prompts to elicit malicious outputs that aligned LLMs should not produce. Investigating jailbreak prompts can help us probe the limitations of LLMs and guide efforts to secure them. Unfortunately, existing jailbreak techniques suffer from either (1) …