s
March 3, 2024, 4:34 p.m. |

Simon Willison's Weblog simonwillison.net

Who Am I? Conditional Prompt Injection Attacks with Microsoft Copilot


New prompt injection variant from Johann Rehberger, demonstrated against Microsoft Copilot. If the LLM tool you are interacting with has awareness of the identity of the current user you can create targeted prompt injection attacks which only activate when an exploit makes it into the token context of a specific individual.


Via @wunderwuzzi23

ai attacks copilot current identity llm llms microsoft microsoft copilot prompt prompt injection promptinjection prompt injection attacks security tool

Data Architect

@ University of Texas at Austin | Austin, TX

Data ETL Engineer

@ University of Texas at Austin | Austin, TX

Lead GNSS Data Scientist

@ Lurra Systems | Melbourne

Senior Machine Learning Engineer (MLOps)

@ Promaton | Remote, Europe

AIML - Sr Machine Learning Engineer, Data and ML Innovation

@ Apple | Seattle, WA, United States

Senior Data Engineer

@ Palta | Palta Cyprus, Palta Warsaw, Palta remote