How Much Does Each Datapoint Leak Your Privacy? Quantifying the Per-datum Membership Leakage
Feb. 16, 2024, 5:42 a.m. | Achraf Azize, Debabrota Basu
cs.LG updates on arXiv.org
Abstract: We study per-datum Membership Inference Attacks (MIAs), in which an attacker aims to infer whether a fixed target datum was included in the input dataset of an algorithm, thereby violating privacy. First, we define the membership leakage of a datum as the advantage of the optimal adversary aiming to identify it. Then, we quantify the per-datum membership leakage for the empirical mean, and show that it depends on the Mahalanobis distance between the …
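The intuition behind the abstract can be illustrated with a small simulation. The sketch below is an assumption-laden toy, not the paper's construction: an attacker observes the empirical mean of a Gaussian dataset and guesses whether a fixed target datum was included. The further the datum lies from the data distribution (here, its Mahalanobis distance from N(0, I) is simply its Euclidean norm), the more it shifts the mean and the larger the attacker's advantage.

```python
import numpy as np

rng = np.random.default_rng(0)

def mia_advantage(z, n=50, dim=2, trials=4000):
    """Empirical advantage of a simple threshold attacker targeting datum z.

    World 0: empirical mean of n i.i.d. N(0, I) samples (z absent).
    World 1: empirical mean where one sample is replaced by z (z present).
    The attacker scores the observed mean by its projection onto z and
    thresholds at the midpoint of the two worlds' expected scores.
    """
    scores0, scores1 = [], []
    for _ in range(trials):
        x = rng.standard_normal((n, dim))
        scores0.append(x.mean(axis=0) @ z)   # world 0 score
        x[0] = z                              # insert the target datum
        scores1.append(x.mean(axis=0) @ z)   # world 1 score
    # Expected score gap between worlds is (z @ z) / n; threshold halfway.
    thr = (z @ z) / (2 * n)
    tpr = np.mean(np.array(scores1) > thr)   # correctly says "member"
    fpr = np.mean(np.array(scores0) > thr)   # wrongly says "member"
    return tpr - fpr                         # attacker's advantage

# For N(0, I), Mahalanobis distance reduces to the Euclidean norm.
near = np.array([0.5, 0.0])   # datum close to the distribution's mean
far = np.array([4.0, 0.0])    # outlier datum
print(f"advantage(near) = {mia_advantage(near):.3f}")
print(f"advantage(far)  = {mia_advantage(far):.3f}")
```

Running this shows the outlier's membership is far easier to infer than the typical datum's, matching the abstract's claim that per-datum leakage grows with the Mahalanobis distance of the target from the data distribution.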