June 3, 2022, 1:11 a.m. | Junyuan Hong, Zhangyang Wang, Jiayu Zhou

cs.LG updates on arXiv.org

Protecting privacy in learning while maintaining model performance has
become increasingly critical in many applications that involve sensitive data.
A popular private learning framework is differentially private learning,
which composes many privatized gradient iterations, each clipping gradients
and adding noise. Under a fixed privacy constraint, it has been shown that
dynamic policies can improve the final iterate loss, i.e., the quality of the
published model. In this talk, we will introduce these dynamic techniques for
learning rate, batch size, noise magnitude and …
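To make the privatized iteration concrete, here is a minimal NumPy sketch of one such step (per-example clipping followed by Gaussian noising). The function name, the toy linear-regression gradients, and all hyperparameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def privatized_gradient_step(params, per_example_grads, lr, clip_norm,
                             noise_multiplier, rng):
    """One DP-SGD-style update (illustrative): clip each per-example gradient
    to `clip_norm`, sum, add Gaussian noise with std `noise_multiplier * clip_norm`,
    average over the batch, then take a step with learning rate `lr`."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    batch_size = len(per_example_grads)
    noisy_mean = (np.sum(clipped, axis=0)
                  + rng.normal(0.0, noise_multiplier * clip_norm,
                               size=params.shape)) / batch_size
    return params - lr * noisy_mean

# Toy usage: squared-loss gradients of a linear model on random data.
rng = np.random.default_rng(0)
w = np.zeros(5)
X, y = rng.normal(size=(32, 5)), rng.normal(size=32)
per_example_grads = [2 * (x @ w - t) * x for x, t in zip(X, y)]
w = privatized_gradient_step(w, per_example_grads, lr=0.1, clip_norm=1.0,
                             noise_multiplier=1.1, rng=rng)
```

The dynamic policies discussed in the talk would vary quantities such as `lr`, the batch size, and `noise_multiplier` across iterations while keeping the overall privacy budget fixed.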

Tags: arxiv, budget, data, efficiency, gradient, privacy
