s
April 14, 2023, 5:35 p.m. |

Simon Willison's Weblog simonwillison.net

Activity around building sophisticated applications on top of LLMs (Large Language Models) such as GPT-3/4/ChatGPT/etc is growing like wildfire right now.


Many of these applications are potentially vulnerable to prompt injection. It's not clear to me that this risk is being taken as seriously as it should.


To quickly review: prompt injection is the vulnerability that exists when you take a carefully crafted prompt like this one:



Translate the following text into French and return a JSON object {"translation”: …

ai applications building chatgpt etc generativeai gpt gpt-3 json language language models large language models llms openai prompt promptengineering prompt injection promptinjection review risk security text translate translation vulnerability vulnerable

Founding AI Engineer, Agents

@ Occam AI | New York

AI Engineer Intern, Agents

@ Occam AI | US

AI Research Scientist

@ Vara | Berlin, Germany and Remote

Data Architect

@ University of Texas at Austin | Austin, TX

Data ETL Engineer

@ University of Texas at Austin | Austin, TX

Lead GNSS Data Scientist

@ Lurra Systems | Melbourne