Web: http://arxiv.org/abs/1905.03871

May 11, 2022, 1:10 a.m. | Galen Andrew, Om Thakkar, H. Brendan McMahan, Swaroop Ramaswamy

stat.ML updates on arXiv.org

Existing approaches for training neural networks with user-level differential
privacy (e.g., DP Federated Averaging) in federated learning (FL) settings
involve bounding the contribution of each user's model update by clipping it to
some constant value. However, there is no good a priori setting of the clipping
norm across tasks and learning settings: the update norm distribution depends
on the model architecture and loss, the amount of data on each device, the
client learning rate, and possibly various other parameters. We …
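To make the clipping step concrete, here is a minimal Python/NumPy sketch of the fixed-norm clipping the abstract describes; the function name, the clip_norm parameter, and the noise_multiplier value are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def clip_update(update: np.ndarray, clip_norm: float) -> np.ndarray:
    """Scale a user's model update so its L2 norm is at most clip_norm.

    This is the fixed-norm clipping step used in DP-FedAvg-style
    training; clip_norm is the constant whose a priori choice the
    abstract argues is difficult across tasks and settings.
    """
    norm = np.linalg.norm(update)
    # Scale down only if the update exceeds the bound; otherwise
    # leave it unchanged (i.e., multiply by min(1, clip_norm / norm)).
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    return update * scale

# Illustrative usage: clip a synthetic update to norm at most 1.0, then
# add Gaussian noise calibrated to the clipping bound, as user-level DP
# mechanisms do. The noise_multiplier here is a placeholder value.
rng = np.random.default_rng(0)
update = rng.normal(size=1000)
clipped = clip_update(update, clip_norm=1.0)
noise_multiplier = 1.1
noisy = clipped + rng.normal(scale=noise_multiplier * 1.0, size=clipped.shape)
```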

