all AI news
Teach LLMs to Phish: Stealing Private Information from Language Models
March 5, 2024, 2:43 p.m. | Ashwinee Panda, Christopher A. Choquette-Choo, Zhengming Zhang, Yaoqing Yang, Prateek Mittal
cs.LG updates on arXiv.org arxiv.org
Abstract: When large language models are trained on private data, it can be a significant privacy risk for them to memorize and regurgitate sensitive information. In this work, we propose a new practical data extraction attack that we call "neural phishing". This attack enables an adversary to target and extract sensitive or personally identifiable information (PII), e.g., credit card numbers, from a model trained on user data with upwards of 10% attack success rates, at times, …
abstract arxiv call cs.ai cs.cl cs.cr cs.lg data data extraction extraction information language language models large language large language models llms phishing practical privacy private data risk stealing them type work
More from arxiv.org / cs.LG updates on arXiv.org
Jobs in AI, ML, Big Data
Senior Machine Learning Engineer
@ GPTZero | Toronto, Canada
ML/AI Engineer / NLP Expert - Custom LLM Development (x/f/m)
@ HelloBetter | Remote
Doctoral Researcher (m/f/div) in Automated Processing of Bioimages
@ Leibniz Institute for Natural Product Research and Infection Biology (Leibniz-HKI) | Jena
Seeking Developers and Engineers for AI T-Shirt Generator Project
@ Chevon Hicks | Remote
Data Scientist, Mid
@ Booz Allen Hamilton | DEU, Stuttgart (Kurmaecker St)
Tech Excellence Data Scientist
@ Booz Allen Hamilton | Undisclosed Location - USA, VA, Mclean