Uncertainty, Calibration, and Membership Inference Attacks: An Information-Theoretic Perspective
Feb. 19, 2024, 5:42 a.m. | Meiyi Zhu, Caili Guo, Chunyan Feng, Osvaldo Simeone
cs.LG updates on arXiv.org
Abstract: In a membership inference attack (MIA), an attacker exploits the overconfidence exhibited by typical machine learning models to determine whether a specific data point was used to train a target model. In this paper, we analyze the performance of the state-of-the-art likelihood ratio attack (LiRA) within an information-theoretical framework that allows the investigation of the impact of the aleatoric uncertainty in the true data generation process, of the epistemic uncertainty caused by a limited training …
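The likelihood ratio attack mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: all confidences below are synthetic placeholders, where real shadow-model scores would come from models trained with and without the candidate point. The parametric variant fits a Gaussian to each population of shadow confidences and thresholds the log-likelihood ratio.

```python
# Minimal LiRA-style membership inference sketch (illustrative only).
# Shadow confidences are synthetic; in practice they are the target point's
# logit-scaled true-class confidence under "in" and "out" shadow models.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shadow scores: models trained ON the point tend to be
# overconfident on it, which is exactly what an MIA exploits.
in_scores = rng.normal(loc=4.0, scale=1.0, size=64)   # point was in training set
out_scores = rng.normal(loc=1.0, scale=1.0, size=64)  # point was held out

# Fit a Gaussian to each population (parametric LiRA variant).
mu_in, sd_in = in_scores.mean(), in_scores.std()
mu_out, sd_out = out_scores.mean(), out_scores.std()

def gauss_logpdf(x: float, mu: float, sd: float) -> float:
    """Log-density of a univariate Gaussian."""
    return -0.5 * ((x - mu) / sd) ** 2 - np.log(sd * np.sqrt(2.0 * np.pi))

def lira_score(observed_conf: float) -> float:
    """Log-likelihood ratio; positive values favor the 'member' hypothesis."""
    return gauss_logpdf(observed_conf, mu_in, sd_in) - gauss_logpdf(
        observed_conf, mu_out, sd_out
    )

# Query the target model on the candidate point, then threshold the ratio.
print(lira_score(3.8) > 0.0)  # high confidence: attack predicts "member"
print(lira_score(0.5) > 0.0)  # low confidence: attack predicts "non-member"
```

The framing connects directly to the paper's theme: a well-calibrated model narrows the gap between the "in" and "out" score distributions, making the likelihood ratio less informative, while overconfidence widens it and strengthens the attack.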