s
Dec. 15, 2023, 8:12 p.m. |

Simon Willison's Weblog simonwillison.net

Data exfiltration from Writer.com with indirect prompt injection


This is a nasty one. Writer.com call themselves a "secure enterprise generative AI platform", offering collaborative generative AI writing assistance and question answering that can integrate with your company's private data.

If this sounds like a recipe for prompt injection vulnerabilities, it is.

Kai Greshake and PromptArmor found exactly that. They identified a classic data exfiltration hole: Writer can summarize documents fetched from the web, so they hid the following instruction in …

ai data hacker llms prompt prompt injection promptinjection security writer writer.com

Data Architect

@ University of Texas at Austin | Austin, TX

Data ETL Engineer

@ University of Texas at Austin | Austin, TX

Lead GNSS Data Scientist

@ Lurra Systems | Melbourne

Senior Machine Learning Engineer (MLOps)

@ Promaton | Remote, Europe

Senior Data Engineer

@ Quantexa | Sydney, New South Wales, Australia

Staff Analytics Engineer

@ Warner Bros. Discovery | NY New York 230 Park Avenue South