s
June 16, 2024, 12:35 a.m. |

Simon Willison's Weblog simonwillison.net

GitHub Copilot Chat: From Prompt Injection to Data Exfiltration


Yet another example of the same vulnerability we see time and time again.

If you build an LLM-based chat interface that gets exposed to both private and untrusted data (in this case the code in VS Code that Copilot Chat can see) and your chat interface supports Markdown images, you have a data exfiltration prompt injection vulnerability.


The fix, applied by GitHub here, is to disable Markdown image references to untrusted …

ai build case chat code copilot copilot chat data data exfiltration example generativeai github github copilot chat llm llms markdown markdownexfiltration prompt prompt injection promptinjection security vs code vulnerability you

Senior Data Engineer

@ Displate | Warsaw

Director of Data Science (f/m/x)

@ AUTO1 Group | Berlin, Germany

Business Intelligence Analyst I [BI Analyst I]

@ Capitec Bank | Stellenbosch, Western Cape, ZA

Data Governance Associate Director

@ Publicis Groupe | London, United Kingdom

Technical Lead - Power BI

@ Birlasoft | INDIA - PUNE - BIRLASOFT OFFICE - HINJAWADI, IN

Data Analyst

@ FirstRand Corporate Centre | 1 First Place, Cnr Simmonds & Pritchard Streets, Johannesburg, 2001