Web: http://arxiv.org/abs/2205.10683

June 23, 2022, 1:11 a.m. | Zhiqi Bu, Jialin Mao, Shiyun Xu

cs.LG updates on arXiv.org

Large convolutional neural networks (CNNs) can be difficult to train in the
differentially private (DP) regime, since the optimization algorithms require a
computationally expensive operation known as per-sample gradient clipping.
We propose an efficient and scalable implementation of this clipping on
convolutional layers, termed mixed ghost clipping, that significantly
eases private training in terms of both time and space complexity,
without affecting accuracy. The improvement in efficiency is rigorously
studied through the first complexity analysis …
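For context on why per-sample clipping is the bottleneck, here is a minimal sketch of the naive approach in PyTorch, which backpropagates once per sample; the paper's mixed ghost clipping is designed to avoid this overhead. `model`, `batch`, `targets`, and `max_grad_norm` are illustrative placeholders, not names or code from the paper.

```python
import torch
import torch.nn.functional as F

def clipped_grad_sum(model, batch, targets, max_grad_norm=1.0):
    """Accumulate per-sample gradients, each clipped to L2 norm <= max_grad_norm.

    This naive sketch runs one backward pass per sample, which is why
    per-sample clipping is expensive in time and memory for large CNNs.
    """
    params = [p for p in model.parameters() if p.requires_grad]
    grad_sum = [torch.zeros_like(p) for p in params]

    for x, y in zip(batch, targets):
        loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)

        # Clip this sample's full gradient vector to the norm bound.
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (max_grad_norm / (total_norm + 1e-6)).clamp(max=1.0)
        for acc, g in zip(grad_sum, grads):
            acc.add_(g * scale)

    # In DP-SGD, calibrated Gaussian noise is added to this sum
    # before it is averaged and passed to the optimizer step.
    return grad_sum
```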

Tags: arxiv, cs.LG, convolutional neural networks, differential privacy, neural networks, privacy, scalable, training
