March 6, 2024, 5:47 a.m. | Qiusi Zhan, Zhixiang Liang, Zifan Ying, Daniel Kang

cs.CL updates on arXiv.org arxiv.org

arXiv:2403.02691v1 Announce Type: new
Abstract: Recent work has embodied LLMs as agents, allowing them to access tools, perform actions, and interact with external content (e.g., emails or websites). However, external content introduces the risk of indirect prompt injection (IPI) attacks, where malicious instructions are embedded within the content processed by LLMs, aiming to manipulate these agents into executing detrimental actions against users. Given the potentially severe consequences of such attacks, establishing benchmarks to assess and mitigate these risks is imperative. …

abstract agents arxiv attacks benchmarking cs.cl cs.cr emails embedded embodied language language model large language large language model llms prompt prompt injection prompt injections risk them tool tools type websites work

Data Architect

@ University of Texas at Austin | Austin, TX

Data ETL Engineer

@ University of Texas at Austin | Austin, TX

Lead GNSS Data Scientist

@ Lurra Systems | Melbourne

Senior Machine Learning Engineer (MLOps)

@ Promaton | Remote, Europe

#13721 - Data Engineer - AI Model Testing

@ Qualitest | Miami, Florida, United States

Elasticsearch Administrator

@ ManTech | 201BF - Customer Site, Chantilly, VA