SoK: Reducing the Vulnerability of Fine-tuned Language Models to Membership Inference Attacks
March 14, 2024, 4:41 a.m. | Guy Amit, Abigail Goldsteen, Ariel Farkash
cs.LG updates on arXiv.org arxiv.org
Abstract: Natural language processing models have experienced a significant upsurge in recent years, with numerous applications being built upon them. Many of these applications require fine-tuning generic base models on customized, proprietary datasets. This fine-tuning data is especially likely to contain personal or sensitive information about individuals, resulting in increased privacy risk. Membership inference attacks are the most commonly employed attacks for assessing the privacy leakage of a machine learning model. However, limited research is available …
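For readers unfamiliar with membership inference, the basic idea can be sketched with a simple loss-threshold attack: training-set members tend to incur lower loss than held-out samples, so predicting "member" when the per-sample loss falls below a threshold already yields a baseline attack. The function, the toy loss values, and the threshold below are illustrative assumptions for this sketch, not details from the paper.

```python
# Minimal sketch of a loss-threshold membership inference attack:
# samples whose per-sample loss falls below a chosen threshold are
# predicted to be training-set members. All values here are toy
# numbers for illustration only.

def infer_membership(losses, threshold):
    """Predict membership: loss below threshold => likely a member."""
    return [loss < threshold for loss in losses]

# Toy per-sample losses: members (seen during fine-tuning) tend to
# have lower loss than held-out non-members.
member_losses = [0.05, 0.10, 0.20]
non_member_losses = [1.50, 2.30, 0.90]

preds = infer_membership(member_losses + non_member_losses, threshold=0.5)
truth = [True] * len(member_losses) + [False] * len(non_member_losses)
accuracy = sum(p == t for p, t in zip(preds, truth)) / len(truth)
print(accuracy)
```

Real attacks in the literature refine this idea (shadow models, calibrated per-sample thresholds), but the loss gap between members and non-members is the core signal that fine-tuning on sensitive data can amplify.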