Web: http://arxiv.org/abs/2206.10469

June 23, 2022, 1:11 a.m. | Nicholas Carlini, Matthew Jagielski, Chiyuan Zhang, Nicolas Papernot, Andreas Terzis, Florian Tramer

cs.LG updates on arXiv.org

Machine learning models trained on private datasets have been shown to leak
their private data. While recent work has found that the average data point is
rarely leaked, the outlier samples are frequently subject to memorization and,
consequently, privacy leakage. We demonstrate and analyse an Onion Effect of
memorization: removing the "layer" of outlier points that are most vulnerable
to a privacy attack exposes a new layer of previously-safe points to the same
attack. We perform several experiments to study …
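The "peel a layer, retrain, re-attack" loop the abstract describes can be sketched as follows. This is an illustrative sketch only, not the paper's method: it assumes a simple loss-thresholding membership-inference proxy, synthetic data, and logistic regression in place of the paper's attacks, datasets, and deep models.

```python
# Illustrative sketch only (assumptions, not the paper's setup): a loss-based
# membership-inference proxy, synthetic data, and logistic regression stand
# in for the paper's attacks, datasets, and deep models.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

def per_example_loss(model, X, y):
    # Per-example cross-entropy; higher loss ~ more "vulnerable" under a
    # simple loss-thresholding membership-inference heuristic.
    probs = model.predict_proba(X)
    picked = probs[np.arange(len(y)), y]
    return -np.log(np.clip(picked, 1e-12, 1.0))

keep = np.ones(len(y), dtype=bool)   # points still in the training set
n_layers, layer_size = 3, 100        # peel 3 "layers" of 100 points each
prev_losses = None

for layer in range(n_layers):
    model = LogisticRegression(max_iter=1000).fit(X[keep], y[keep])
    losses = np.full(len(y), -np.inf)
    losses[keep] = per_example_loss(model, X[keep], y[keep])

    # The most "vulnerable" (highest-loss) remaining points form this layer.
    layer_idx = np.argsort(losses)[-layer_size:]
    if prev_losses is not None:
        # Onion-effect check: how exposed were this layer's points before
        # the previous layer was removed and the model retrained?
        print(f"layer {layer}: loss now {losses[layer_idx].mean():.3f}, "
              f"before previous removal {prev_losses[layer_idx].mean():.3f}")
    keep[layer_idx] = False          # remove the layer, retrain next round
    prev_losses = losses
```

If the Onion Effect appears, the points selected in later layers show higher attack scores after the earlier layers are removed than they did before, even though they were untouched.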

Tags: arxiv, lg, privacy
