Feb. 15, 2024, 5:42 a.m. | Simon Geisler, Tom Wollschläger, M. H. I. Abdalla, Johannes Gasteiger, Stephan Günnemann

cs.LG updates on arXiv.org

arXiv:2402.09154v1 Announce Type: new
Abstract: Current LLM alignment methods are readily broken through specifically crafted adversarial prompts. While crafting adversarial prompts using discrete optimization is highly effective, such attacks typically use more than 100,000 LLM calls. This high computational cost makes them unsuitable for, e.g., quantitative analyses and adversarial training. To remedy this, we revisit Projected Gradient Descent (PGD) on the continuously relaxed input prompt. Although previous attempts with ordinary gradient-based attacks largely failed, we show that carefully controlling the …
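The approach the abstract describes is to attack over a continuous relaxation of the prompt: each token position is represented as a point on the probability simplex over the vocabulary, gradients flow through the resulting soft embeddings, and each update is projected back onto the simplex. Below is a minimal sketch of one such PGD step, assuming a PyTorch model that accepts soft embeddings via an `inputs_embeds` keyword and a user-supplied `loss_fn`; it illustrates the general technique only, not the paper's exact attack, which the (truncated) abstract indicates depends on carefully controlling the optimization.

```python
import torch


def project_to_simplex(x):
    # Euclidean projection of each row of x onto the probability simplex
    # (standard sorting-based algorithm); x has shape [seq_len, vocab_size].
    sorted_x, _ = torch.sort(x, dim=-1, descending=True)
    cumsum = sorted_x.cumsum(dim=-1) - 1.0
    ks = torch.arange(1, x.shape[-1] + 1, device=x.device, dtype=x.dtype)
    cond = sorted_x - cumsum / ks > 0
    # Index of the last position where the condition holds.
    rho = cond.to(x.dtype).cumsum(dim=-1).argmax(dim=-1, keepdim=True)
    tau = cumsum.gather(-1, rho) / (rho.to(x.dtype) + 1.0)
    return torch.clamp(x - tau, min=0.0)


def pgd_step(model, embedding_matrix, relaxed_prompt, loss_fn, lr=0.1):
    """One PGD step on a continuously relaxed prompt (illustrative sketch).

    relaxed_prompt: [seq_len, vocab_size], rows on the probability simplex.
    embedding_matrix: [vocab_size, hidden_dim] token embedding table.
    loss_fn: hypothetical callable mapping model output to an attack loss.
    """
    relaxed_prompt = relaxed_prompt.detach().requires_grad_(True)
    # Soft token embeddings: convex combination of embedding rows.
    inputs_embeds = relaxed_prompt @ embedding_matrix
    loss = loss_fn(model(inputs_embeds=inputs_embeds.unsqueeze(0)))
    loss.backward()
    with torch.no_grad():
        updated = relaxed_prompt - lr * relaxed_prompt.grad
        return project_to_simplex(updated)
```

In practice the relaxed prompt would be initialized from one-hot vectors of an initial adversarial string and updated for many steps, then discretized (e.g., by taking the argmax per position) to obtain a hard prompt; this post-hoc discretization step is an assumption here, not something the abstract specifies.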
