GitHub Copilot Chat: From Prompt Injection to Data Exfiltration
Simon Willison's Weblog simonwillison.net
Yet another example of the same vulnerability we see time and time again.
If you build an LLM-based chat interface that gets exposed to both private and untrusted data (in this case the code in VS Code that Copilot Chat can see) and your chat interface supports Markdown images, you have a data exfiltration prompt injection vulnerability.
The fix, applied by GitHub here, is to disable Markdown image references to untrusted …
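The mitigation pattern can be sketched as a post-processing step on the model's output: before rendering, strip any Markdown image reference whose URL points outside an allowlist of trusted hosts, so an injected image URL can't smuggle data out when the client fetches it. This is a minimal illustrative sketch, not GitHub's actual implementation; the `TRUSTED_HOSTS` set and function names are assumptions.

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist of image hosts considered safe to fetch from.
TRUSTED_HOSTS = {"github.com", "githubusercontent.com"}

# Matches Markdown image syntax: ![alt text](url ...)
IMAGE_RE = re.compile(r"!\[([^\]]*)\]\(([^)\s]+)[^)]*\)")

def strip_untrusted_images(markdown: str) -> str:
    """Drop Markdown images whose URL host is not on the allowlist,
    keeping only the alt text, so a prompt-injected image URL cannot
    exfiltrate data when the chat UI renders and fetches it."""
    def repl(match: re.Match) -> str:
        alt, url = match.group(1), match.group(2)
        host = urlparse(url).hostname or ""
        # Relative URLs (no host) resolve to the app's own origin,
        # so they cannot reach an attacker-controlled server.
        if host == "" or any(
            host == h or host.endswith("." + h) for h in TRUSTED_HOSTS
        ):
            return match.group(0)  # keep trusted or relative images
        return alt  # untrusted host: drop the image, keep alt text
    return IMAGE_RE.sub(repl, markdown)
```

The key design point is that the filter runs on the rendered output, after the LLM has produced it, because the injection lives in the untrusted input the model reads and no amount of prompting reliably prevents the model from emitting the attacker's Markdown.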