Web: http://arxiv.org/abs/2201.12328

Jan. 31, 2022, 2:11 a.m. | Alexey Kurakin, Steve Chien, Shuang Song, Roxana Geambasu, Andreas Terzis, Abhradeep Thakurta

cs.LG updates on arXiv.org

Differential privacy (DP) is the de facto standard for training machine
learning (ML) models, including neural networks, while ensuring the privacy of
individual examples in the training set. Despite a rich literature on how to
train ML models with differential privacy, it remains extremely challenging to
train real-life, large neural networks with both reasonable accuracy and
privacy.


We set out to investigate how to do this, using ImageNet image classification
as a poster example of an ML task that is …

Tags: arxiv, differential privacy, imagenet, privacy, scale, training
