Feb. 28, 2024, 5:42 a.m. | Jeffrey G. Wang, Jason Wang, Marvin Li, Seth Neel

cs.LG updates on arXiv.org

arXiv:2402.17012v1 Announce Type: cross
Abstract: In this paper, we undertake a systematic study of privacy attacks against open-source Large Language Models (LLMs), where an adversary has access to the model weights, gradients, or losses and tries to exploit them to learn something about the underlying training data. Our headline results are the first membership inference attacks (MIAs) against pre-trained LLMs that simultaneously achieve high true positive rates (TPRs) and low false positive rates (FPRs), and a pipeline showing that over $50\%$ …
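To make the threat model concrete, below is a minimal sketch of a loss-thresholding membership inference attack against an open-source LLM. This is an illustrative baseline only, not the attack pipeline from the paper; the model name and threshold value are assumptions for the example.

```python
# Sketch of a loss-threshold MIA: samples the model assigns unusually low
# loss to are flagged as likely members of its training data.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder open-source model, not the one studied

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()


@torch.no_grad()
def sample_loss(text: str) -> float:
    """Mean per-token cross-entropy loss of the model on `text`."""
    inputs = tokenizer(text, return_tensors="pt")
    outputs = model(**inputs, labels=inputs["input_ids"])
    return outputs.loss.item()


def predict_member(text: str, threshold: float = 3.0) -> bool:
    """Flag `text` as a suspected training member if its loss is low.

    The threshold here is arbitrary; in practice it would be calibrated
    on known member/non-member samples to trade off TPR against FPR.
    """
    return sample_loss(text) < threshold
```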

