Differentially Private Learning with Adaptive Clipping. (arXiv:1905.03871v5 [cs.LG] UPDATED)
Web: http://arxiv.org/abs/1905.03871
May 11, 2022, 1:10 a.m. | Galen Andrew, Om Thakkar, H. Brendan McMahan, Swaroop Ramaswamy
stat.ML updates on arXiv.org arxiv.org
Existing approaches for training neural networks with user-level differential
privacy (e.g., DP Federated Averaging) in federated learning (FL) settings
bound the contribution of each user's model update by clipping it to some
constant value. However, there is no good a priori setting of the clipping
norm across tasks and learning settings: the distribution of update norms
depends on the model architecture and loss, the amount of data on each device,
the client learning rate, and possibly various other parameters. We …
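The clipping step the abstract refers to can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `clip_update` rescales an update so its L2 norm is at most the clipping norm, and `adapt_clip_norm` is a hedged sketch of a quantile-tracking rule in the spirit of adaptive clipping, where the bound is nudged geometrically toward a target quantile of the observed update norms. The function names, the learning rate `lr`, and the target quantile are illustrative assumptions.

```python
import numpy as np

def clip_update(update, clip_norm):
    """Scale `update` so its L2 norm is at most `clip_norm`.

    Returns the clipped update and an indicator of whether the
    original norm was already within the bound (used below to
    estimate the clipped fraction).
    """
    norm = np.linalg.norm(update)
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    return update * scale, float(norm <= clip_norm)

def adapt_clip_norm(clip_norm, within_bound_fraction,
                    target_quantile=0.5, lr=0.2):
    """Illustrative geometric update of the clipping norm.

    If fewer updates than `target_quantile` fit under the bound,
    the bound grows; if more fit, it shrinks. `lr` and
    `target_quantile` are assumed hyperparameters, not values
    from the paper.
    """
    return clip_norm * np.exp(-lr * (within_bound_fraction - target_quantile))
```

For example, an update of norm 5 clipped at norm 1 is rescaled by 0.2, and a round in which no update fits under the bound pushes the bound upward:

```python
clipped, within = clip_update(np.array([3.0, 4.0]), 1.0)
# clipped has L2 norm 1.0; within == 0.0 since 5 > 1
new_norm = adapt_clip_norm(1.0, within_bound_fraction=0.0)
# new_norm > 1.0: the bound grows toward the norm distribution
```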