Feb. 19, 2024, 5:42 a.m. | Meiyi Zhu, Caili Guo, Chunyan Feng, Osvaldo Simeone

cs.LG updates on arXiv.org

arXiv:2402.10686v1 Announce Type: cross
Abstract: In a membership inference attack (MIA), an attacker exploits the overconfidence exhibited by typical machine learning models to determine whether a specific data point was used to train a target model. In this paper, we analyze the performance of the state-of-the-art likelihood ratio attack (LiRA) within an information-theoretical framework that allows the investigation of the impact of the aleatoric uncertainty in the true data generation process, of the epistemic uncertainty caused by a limited training …
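As background for readers unfamiliar with the attack being analyzed: LiRA, as introduced by Carlini et al., scores a candidate point x by computing a confidence statistic phi(x) on the target model, fitting Gaussians to that statistic over shadow models trained with ("in") and without ("out") the point, and forming the likelihood ratio N(phi(x); mu_in, sigma_in^2) / N(phi(x); mu_out, sigma_out^2). The sketch below is a minimal Python version of that scoring step only; it is not the paper's information-theoretical analysis, and the function name and inputs are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def lira_score(target_stat, in_stats, out_stats):
    """Minimal LiRA-style membership score (a sketch, not the paper's method).

    target_stat: logit-scaled confidence of the target model on the
        candidate point, e.g. log(p / (1 - p)).
    in_stats / out_stats: the same statistic measured on shadow models
        trained with / without the candidate point.
    Returns a log-likelihood ratio; larger values suggest membership.
    """
    mu_in, sd_in = np.mean(in_stats), np.std(in_stats) + 1e-8
    mu_out, sd_out = np.mean(out_stats), np.std(out_stats) + 1e-8
    # Gaussian likelihood-ratio test between the "member" and
    # "non-member" hypotheses for the observed statistic.
    return (norm.logpdf(target_stat, mu_in, sd_in)
            - norm.logpdf(target_stat, mu_out, sd_out))

# Hypothetical shadow-model statistics for one candidate point.
score = lira_score(2.3, in_stats=[2.1, 2.6, 2.4], out_stats=[0.2, -0.1, 0.5])
```

A positive score means the observed statistic is better explained by the "trained on this point" hypothesis, which is exactly the overconfidence signal the abstract refers to.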
