March 8, 2024, 5:42 a.m. | Nico Manzonelli, Wanrong Zhang, Salil Vadhan

cs.LG updates on arXiv.org

arXiv:2403.04451v1 Announce Type: cross
Abstract: Recent research shows that large language models are susceptible to privacy attacks that infer aspects of the training data. However, it is unclear if simpler generative models, like topic models, share similar vulnerabilities. In this work, we propose an attack against topic models that can confidently identify members of the training data in Latent Dirichlet Allocation. Our results suggest that the privacy risks associated with generative modeling are not restricted to large neural models. Additionally, …
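To make the threat model concrete, below is a minimal, hypothetical sketch of a likelihood-threshold membership inference attack against an LDA topic model. This is a generic illustration, not the attack proposed in the paper: the corpus names (`train_docs`, `candidate_docs`), the per-document scoring rule, and the median-based threshold are all assumptions for demonstration only.

```python
# Hypothetical sketch: score candidate documents under a fitted LDA model and
# flag high-likelihood documents as suspected training-set members.
# NOT the paper's attack; a generic likelihood-threshold illustration.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy training corpus for the target model (assumed / illustrative only).
train_docs = [
    "the cat sat on the mat",
    "dogs chase cats in the yard",
    "topic models describe document collections",
    "latent dirichlet allocation assigns topics to words",
]
# Candidates whose membership we want to test (one member, one non-member).
candidate_docs = [
    "dogs chase cats in the yard",
    "quantum field theory lecture notes",
]

# Fit the target topic model on the training corpus.
vec = CountVectorizer()
X_train = vec.fit_transform(train_docs)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X_train)

def doc_score(doc):
    """Length-normalized approximate log-likelihood of one document under the model."""
    x = vec.transform([doc])
    return lda.score(x) / max(x.sum(), 1)

# Simple decision rule (assumed): flag documents scoring at or above the
# median training-document score as suspected members.
threshold = np.median([doc_score(d) for d in train_docs])
for doc in candidate_docs:
    print(f"{doc[:40]!r:45} member? {doc_score(doc) >= threshold}")
```

In practice, an attack of this flavor would calibrate the threshold against reference models or shadow data rather than the target's own training scores; the point here is only that simple per-document statistics from a fitted topic model can leak membership signal.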

