all AI news
Differentially Private Learning with Adaptive Clipping. (arXiv:1905.03871v5 [cs.LG] UPDATED)
Web: http://arxiv.org/abs/1905.03871
May 11, 2022, 1:11 a.m. | Galen Andrew, Om Thakkar, H. Brendan McMahan, Swaroop Ramaswamy
cs.LG updates on arXiv.org arxiv.org
Existing approaches for training neural networks with user-level differential
privacy (e.g., DP Federated Averaging) in federated learning (FL) settings
involve bounding the contribution of each user's model update by clipping it to
some constant value. However, there is no good a priori setting of the clipping
norm across tasks and learning settings: the update norm distribution depends
on the model architecture and loss, the amount of data on each device, the
client learning rate, and possibly various other parameters. We …
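The mechanics described in the abstract can be sketched in a few lines: each user's update is clipped to an L2 norm bound, and the bound itself is adapted toward a target quantile of the observed update norms via a geometric update. The sketch below is illustrative only, with hypothetical function names; a real DP-FedAvg implementation would also add calibrated noise to the averaged update and to the clipped-fraction statistic, which is omitted here.

```python
import numpy as np

def clip_update(update, clip_norm):
    """Scale a model update so its L2 norm is at most clip_norm.

    Returns the clipped update and an indicator (1.0 if the update
    was already within the bound, else 0.0)."""
    norm = np.linalg.norm(update)
    scale = min(1.0, clip_norm / max(norm, 1e-12))
    return update * scale, float(norm <= clip_norm)

def adaptive_clip_round(updates, clip_norm, target_quantile=0.5, lr=0.2):
    """One simulated round: clip each user update, average, then move
    clip_norm toward the target quantile of update norms (geometric
    update; noise addition omitted for clarity)."""
    clipped, indicators = [], []
    for u in updates:
        c, b = clip_update(u, clip_norm)
        clipped.append(c)
        indicators.append(b)
    # Fraction of updates whose norm was within the current bound.
    b_avg = float(np.mean(indicators))
    # Shrink C when too few updates are clipped, grow it when too many are.
    new_clip = clip_norm * np.exp(-lr * (b_avg - target_quantile))
    return np.mean(clipped, axis=0), new_clip
```

Because the bound tracks a quantile of the norm distribution rather than a fixed constant, it self-tunes across architectures, loss functions, and data scales without per-task hand-tuning.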
Latest AI/ML/Big Data Jobs
Predictive Ecology Postdoctoral Fellow
@ Lawrence Berkeley National Lab | Berkeley, CA
Data Analyst, Patagonia Action Works
@ Patagonia | Remote
Data & Insights Strategy & Innovation General Manager
@ Chevron Services Company, a division of Chevron U.S.A. Inc. | Houston, TX
Faculty members in Research areas such as Bayesian and Spatial Statistics; Data Privacy and Security; AI/ML; NLP; Image and Video Data Analysis
@ Ahmedabad University | Ahmedabad, India
Director, Applied Mathematics & Computational Research Division
@ Lawrence Berkeley National Lab | Berkeley, CA
Business Data Analyst
@ MainStreet Family Care | Birmingham, AL