Web: http://arxiv.org/abs/2206.07136

June 16, 2022, 1:10 a.m. | Zhiqi Bu, Yu-Xiang Wang, Sheng Zha, George Karypis

cs.LG updates on arXiv.org arxiv.org

Per-example gradient clipping is a key algorithmic step that enables
practical differentially private (DP) training for deep learning models. The
choice of clipping norm $R$, however, is shown to be vital for achieving high
accuracy under DP. We propose an easy-to-use replacement, called AutoClipping,
that eliminates the need to tune $R$ for any DP optimizers, including DP-SGD,
DP-Adam, DP-LAMB and many others. The automatic variants are as private and
computationally efficient as existing DP optimizers, but require no DP-specific
hyperparameters …
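To make the contrast concrete, here is a minimal sketch of the two per-example scaling rules: standard clipping scales each example's gradient by $\min(1, R/\lVert g \rVert)$ and so requires tuning $R$, while an $R$-free automatic variant normalizes by $1/(\lVert g \rVert + \gamma)$ for a small stabilizer $\gamma$. The function names and the toy aggregation below are illustrative assumptions, not the paper's code; noise addition and the optimizer step are omitted.

```python
import math

def clip_factor(grad_norm, R):
    # Standard per-example clipping: scale by min(1, R / ||g||).
    # Requires choosing the clipping threshold R.
    return min(1.0, R / grad_norm)

def auto_factor(grad_norm, gamma=0.01):
    # R-free normalization (sketch of the automatic variant):
    # scale by 1 / (||g|| + gamma), so there is no R to tune.
    return 1.0 / (grad_norm + gamma)

def aggregate(per_example_grads, factor_fn, **kw):
    # Scale each example's gradient by its factor, then sum.
    # (DP noise addition is omitted from this sketch.)
    dim = len(per_example_grads[0])
    total = [0.0] * dim
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        c = factor_fn(norm, **kw)
        for i, x in enumerate(g):
            total[i] += c * x
    return total
```

With either rule, every example contributes a gradient of bounded norm, which is what the DP analysis needs; the automatic rule simply removes the threshold hyperparameter.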

