Web: http://arxiv.org/abs/1905.03871

May 11, 2022, 1:11 a.m. | Galen Andrew, Om Thakkar, H. Brendan McMahan, Swaroop Ramaswamy

cs.LG updates on arXiv.org

Existing approaches for training neural networks with user-level differential
privacy (e.g., DP Federated Averaging) in federated learning (FL) settings
involve bounding the contribution of each user's model update by clipping it to
some constant value. However, there is no good a priori setting of the clipping
norm across tasks and learning settings: the update norm distribution depends
on the model architecture and loss, the amount of data on each device, the
client learning rate, and possibly various other parameters. We …
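The clipping step the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `clip_update` bounds a single user's update to an L2 norm of at most `clip_norm`, and `private_average` is a hypothetical server-side step in the style of DP Federated Averaging that sums the clipped updates, adds Gaussian noise calibrated to the clipping norm, and averages. All names and the `noise_multiplier` parameter are assumptions for illustration.

```python
import numpy as np

def clip_update(update, clip_norm):
    """Scale a user's model update so its L2 norm is at most clip_norm.

    If the update is already within the bound, it is returned unchanged;
    otherwise it is scaled down to lie on the clip_norm sphere.
    """
    norm = np.linalg.norm(update)
    scale = min(1.0, clip_norm / max(norm, 1e-12))  # avoid division by zero
    return update * scale

def private_average(updates, clip_norm, noise_multiplier, rng):
    """Illustrative DP-FedAvg-style server step (a sketch, not the paper's code).

    Clips each user's update, sums them, adds Gaussian noise whose scale is
    proportional to the sensitivity (clip_norm), and averages over users.
    """
    clipped = [clip_update(u, clip_norm) for u in updates]
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(updates)
```

Because every clipped update has norm at most `clip_norm`, replacing one user's contribution changes the sum by at most `2 * clip_norm`, which is what lets the added Gaussian noise be calibrated for user-level privacy. The abstract's point is that a good constant `clip_norm` is hard to choose in advance, since the distribution of unclipped norms varies across tasks and over the course of training.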

arxiv learning
