all AI news
Prompt injection: What's the worst that can happen?
Simon Willison's Weblog simonwillison.net
Activity around building sophisticated applications on top of LLMs (Large Language Models) such as GPT-3/4/ChatGPT/etc is growing like wildfire right now.
Many of these applications are potentially vulnerable to prompt injection. It's not clear to me that this risk is being taken as seriously as it should.
To quickly review: prompt injection is the vulnerability that exists when you take a carefully crafted prompt like this one:
Translate the following text into French and return a JSON object {"translation”: …
ai applications building chatgpt etc generativeai gpt gpt-3 json language language models large language models llms openai prompt promptengineering prompt injection promptinjection review risk security text translate translation vulnerability vulnerable