Web: http://arxiv.org/abs/2205.02973

May 9, 2022, 1:10 a.m. | Harsh Mehta, Abhradeep Thakurta, Alexey Kurakin, Ashok Cutkosky

cs.CV updates on arXiv.org

Differential Privacy (DP) provides a formal framework for training machine
learning models with individual, example-level privacy. Training models with DP
protects the model against leakage of sensitive data in a potentially
adversarial setting. In the field of deep learning, Differentially Private
Stochastic Gradient Descent (DP-SGD) has emerged as a popular private training
algorithm. Private training using DP-SGD protects against leakage by injecting
noise into individual example gradients, such that the trained model weights
become nearly independent of the use …
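To make the DP-SGD mechanism described above concrete, here is a minimal NumPy sketch of one private update step: each example's gradient is clipped to a fixed norm, Gaussian noise scaled by that norm is added to the sum, and the noisy average drives the descent step. The function name, hyperparameters, and the toy linear-regression batch are illustrative assumptions, not code from the paper.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.1, rng=None):
    """One DP-SGD update (sketch): clip each per-example gradient,
    add Gaussian noise to the sum, average, and take a gradient step."""
    rng = rng or np.random.default_rng()
    # Clip each example's gradient to L2 norm <= clip_norm.
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    summed = np.sum(clipped, axis=0)
    # Noise standard deviation uses the usual noise_multiplier * clip_norm scaling.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    noisy_mean = (summed + noise) / len(per_example_grads)
    return params - lr * noisy_mean

# Toy usage (hypothetical data): linear model with squared loss on a small batch.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(8, 3)), rng.normal(size=8)
w = np.zeros(3)
per_example_grads = [(x @ w - t) * x for x, t in zip(X, y)]  # grad of 0.5*(x.w - t)^2
w = dp_sgd_step(w, per_example_grads, rng=rng)
```

The formal privacy guarantee comes from accounting over many such noisy steps; this sketch only shows the clip-and-noise mechanics that make the update nearly independent of any single training example.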

Tags: arxiv, classification, image, learning, scale, transfer, transfer learning
