March 5, 2024, 2:53 p.m. | Sander Schulhoff, Jeremy Pinto, Anaum Khan, Louis-Fran\c{c}ois Bouchard, Chenglei Si, Svetlina Anati, Valen Tagliabue, Anson Liu Kost, Christopher Car

cs.CL updates on arXiv.org arxiv.org

arXiv:2311.16119v3 Announce Type: replace-cross
Abstract: Large Language Models (LLMs) are deployed in interactive contexts with direct user engagement, such as chatbots and writing assistants. These deployments are vulnerable to prompt injection and jailbreaking (collectively, prompt hacking), in which models are manipulated to ignore their original instructions and follow potentially malicious ones. Although widely acknowledged as a significant security threat, there is a dearth of large-scale resources and quantitative studies on prompt hacking. To address this lacuna, we launch a global …

abstract arxiv assistants chatbots competition cs.ai cs.cl cs.cr deployments engagement global hacking interactive jailbreaking language language models large language large language models llms prompt prompt injection scale through type user engagement vulnerabilities vulnerable writing

Data Architect

@ University of Texas at Austin | Austin, TX

Data ETL Engineer

@ University of Texas at Austin | Austin, TX

Lead GNSS Data Scientist

@ Lurra Systems | Melbourne

Senior Machine Learning Engineer (MLOps)

@ Promaton | Remote, Europe

#13721 - Data Engineer - AI Model Testing

@ Qualitest | Miami, Florida, United States

Elasticsearch Administrator

@ ManTech | 201BF - Customer Site, Chantilly, VA