Feb. 13, 2024, 5:49 a.m. | Michael Duan, Anshuman Suri, Niloofar Mireshghallah, Sewon Min, Weijia Shi, Luke Zettlemoyer, Yulia Tsvetkov

cs.CL updates on arXiv.org

Membership inference attacks (MIAs) attempt to predict whether a particular datapoint is a member of a target model's training data. Despite extensive research on traditional machine learning models, there has been limited work studying MIAs on the pre-training data of large language models (LLMs). We perform a large-scale evaluation of MIAs over a suite of language models (LMs) trained on the Pile, ranging from 160M to 12B parameters. We find that MIAs barely outperform random guessing for most settings across …
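To make the setup concrete, the simplest MIA of the kind evaluated here is a loss threshold: an example is guessed to be a training member when the model's loss on it is low. The sketch below is a minimal, hypothetical illustration with made-up loss values, not the paper's actual attack or data.

```python
# Minimal sketch of a loss-threshold membership inference attack:
# predict "member" when the target model's per-example loss falls
# below a chosen threshold. All loss values below are illustrative.

def loss_threshold_mia(losses, threshold):
    """Return membership predictions: True means 'training member'."""
    return [loss < threshold for loss in losses]

# Hypothetical per-example losses (lower = model fits the example better).
member_losses = [0.8, 1.1, 0.9]      # assumed seen during training
nonmember_losses = [2.5, 3.0, 1.0]   # assumed held out

preds = loss_threshold_mia(member_losses + nonmember_losses, threshold=1.5)

# Attack quality is judged by how well members and non-members separate:
tpr = sum(preds[:3]) / 3   # fraction of members correctly flagged
fpr = sum(preds[3:]) / 3   # fraction of non-members wrongly flagged
print(tpr, fpr)
```

When member and non-member loss distributions overlap heavily (as the paper finds for LLM pre-training data), the false-positive rate rises toward the true-positive rate and the attack approaches random guessing.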

