Unlocking High-Accuracy Differentially Private Image Classification through Scale

Web: http://arxiv.org/abs/2204.13650

June 17, 2022, 1:12 a.m. | Soham De, Leonard Berrada, Jamie Hayes, Samuel L. Smith, Borja Balle

stat.ML updates on arXiv.org

Differential Privacy (DP) provides a formal privacy guarantee preventing
adversaries with access to a machine learning model from extracting information
about individual training points. Differentially Private Stochastic Gradient
Descent (DP-SGD), the most popular DP training method for deep learning,
realizes this protection by injecting noise during training. However, previous
works have found that DP-SGD often leads to a significant degradation in
performance on standard image classification benchmarks. Furthermore, some
authors have postulated that DP-SGD inherently performs poorly on large models, …
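
For readers unfamiliar with the mechanism, the noise injection mentioned above follows the standard DP-SGD recipe (Abadi et al., 2016): clip each example's gradient to a fixed L2 norm, average the clipped gradients, and add Gaussian noise calibrated to the clipping norm. Below is a minimal NumPy sketch of a single update step; the function name, parameter names, and default values are illustrative, not taken from the paper.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1,
                clip_norm=1.0, noise_multiplier=1.0):
    """One DP-SGD update (illustrative sketch, not the paper's code).

    Each per-example gradient is clipped to L2 norm `clip_norm`, the
    clipped gradients are averaged, and Gaussian noise with standard
    deviation noise_multiplier * clip_norm / batch_size is added,
    since the clipped gradient sum has L2 sensitivity `clip_norm`.
    """
    batch_size = len(per_example_grads)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale the gradient down only if its norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    noise = np.random.normal(
        0.0, noise_multiplier * clip_norm / batch_size,
        size=mean_grad.shape)
    return params - lr * (mean_grad + noise)

# Example usage with synthetic per-example gradients:
params = np.zeros(4)
grads = [np.random.randn(4) for _ in range(32)]
params = dp_sgd_step(params, grads)
```

Note that the per-coordinate noise standard deviation is set by the clipping norm and batch size, so the norm of the total noise vector grows with the number of model parameters; this is the intuition behind the concern about large models quoted above.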
