Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks
April 3, 2024, 4:42 a.m. | Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion
cs.LG updates on arXiv.org (arxiv.org)
Abstract: We show that even the most recent safety-aligned LLMs are not robust to simple adaptive jailbreaking attacks. First, we demonstrate how to successfully leverage access to logprobs for jailbreaking: we initially design an adversarial prompt template (sometimes adapted to the target LLM), and then we apply random search on a suffix to maximize the target logprob (e.g., of the token "Sure"), potentially with multiple restarts. In this way, we achieve nearly 100% attack success rate …
Tags: arxiv, attacks, cs.ai, cs.cr, cs.lg, jailbreaking, llms, safety, stat.ml
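The abstract describes a random-search loop over an adversarial suffix that maximizes the log-probability of an affirmative first token such as "Sure". The sketch below illustrates that loop only in outline, under stated assumptions: the scorer target_logprob is a toy placeholder standing in for a real logprob query to the target LLM, the TEMPLATE with {request} and {suffix} slots is hypothetical, and mutations are made at the character level purely for illustration; none of this reproduces the paper's actual prompt templates or token-level procedure.

import random
import string

# Placeholder scorer (assumption): in the attack this would query the target
# LLM and return the log-probability of the token "Sure" at the start of its
# response. The toy scorer below just rewards the letter 'x' so the example
# runs end to end; replace it with a real logprob query.
def target_logprob(prompt: str) -> float:
    return prompt.count("x") * 0.1

# Hypothetical adversarial prompt template with {request} and {suffix} slots.
TEMPLATE = "{request} {suffix}"

def random_search_suffix(request: str, suffix_len: int = 25,
                         iters: int = 500, restarts: int = 3) -> str:
    """Randomly mutate a character suffix to maximize the scorer, with restarts."""
    charset = string.ascii_letters + string.digits + " "
    best_suffix, best_score = "", float("-inf")
    for _ in range(restarts):                       # multiple restarts, as in the abstract
        suffix = "".join(random.choices(charset, k=suffix_len))
        score = target_logprob(TEMPLATE.format(request=request, suffix=suffix))
        for _ in range(iters):
            cand = list(suffix)
            cand[random.randrange(suffix_len)] = random.choice(charset)  # mutate one position
            cand = "".join(cand)
            cand_score = target_logprob(TEMPLATE.format(request=request, suffix=cand))
            if cand_score >= score:                  # keep the mutation only if it helps
                suffix, score = cand, cand_score
        if score > best_score:
            best_suffix, best_score = suffix, score
    return best_suffix

if __name__ == "__main__":
    print(random_search_suffix("Example request"))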