s
March 5, 2024, 4:05 p.m. |

Simon Willison's Weblog simonwillison.net

I keep seeing people use the term "prompt injection" when they're actually talking about "jailbreaking".


This mistake is so common now that I'm not sure it's possible to correct course: language meaning (especially for recently coined terms) comes from how that language is used. I'm going to try anyway, because I think the distinction really matters.


Definitions


Prompt injection is a class of attacks against applications built on top of Large Language Models (LLMs) that work by concatenating untrusted user …

ai course generativeai jailbreak jailbreaking language llms meaning people prompt prompt injection promptinjection security terms

AI Research Scientist

@ Vara | Berlin, Germany and Remote

Data Architect

@ University of Texas at Austin | Austin, TX

Data ETL Engineer

@ University of Texas at Austin | Austin, TX

Lead GNSS Data Scientist

@ Lurra Systems | Melbourne

Senior Machine Learning Engineer (MLOps)

@ Promaton | Remote, Europe

Business Data Analyst

@ Alstom | Johannesburg, GT, ZA