Bayesian Estimation of Differential Privacy

Web: http://arxiv.org/abs/2206.05199

June 16, 2022, 1:11 a.m. | Santiago Zanella-Béguelin (Microsoft Research), Lukas Wutschitz (Microsoft), Shruti Tople (Microsoft Research), Ahmed Salem (Microsoft Research), et al.

cs.LG updates on arXiv.org

Algorithms such as Differentially Private SGD enable training machine
learning models with formal privacy guarantees. However, there is a discrepancy
between the protection that such algorithms guarantee in theory and the
protection they afford in practice. An emerging strand of work empirically
estimates the protection afforded by differentially private training as a
confidence interval for the privacy budget $\varepsilon$ spent on training a
model. Existing approaches derive confidence intervals for $\varepsilon$ from
confidence intervals for the false positive and false …
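The existing (frequentist) recipe the abstract refers to can be sketched as follows. A membership inference attack is run many times; for an (ε, δ)-DP mechanism, any attack's error rates must satisfy FPR + e^ε · FNR ≥ 1 − δ, so upper confidence bounds on the observed FPR and FNR yield a conservative lower confidence bound on the ε actually spent. The sketch below is illustrative only, not the paper's Bayesian method: it uses a simple Hoeffding bound in place of the tighter Clopper-Pearson or credible intervals used in the literature, and the function names are assumptions.

```python
import math


def hoeffding_upper(successes: int, trials: int, alpha: float) -> float:
    """One-sided Hoeffding upper confidence bound on a binomial rate.

    Holds with probability at least 1 - alpha; crude but stdlib-only.
    """
    phat = successes / trials
    return min(1.0, phat + math.sqrt(math.log(1 / alpha) / (2 * trials)))


def eps_lower_bound(fp: int, fn: int, n_neg: int, n_pos: int,
                    delta: float = 1e-5, alpha: float = 0.05) -> float:
    """Conservative lower bound on the epsilon spent, valid with
    confidence 1 - alpha, from the outcomes of a membership attack.

    fp / n_neg: false positives among non-member trials.
    fn / n_pos: false negatives among member trials.
    """
    # Upper-bound each error rate at level alpha/2 so the combined
    # epsilon bound holds with overall confidence 1 - alpha.
    fpr_hi = hoeffding_upper(fp, n_neg, alpha / 2)
    fnr_hi = hoeffding_upper(fn, n_pos, alpha / 2)

    # (eps, delta)-DP implies FPR + e^eps * FNR >= 1 - delta and the
    # symmetric inequality, hence eps >= log((1 - delta - a) / b) for
    # both orderings of the two error rates.
    bounds = [0.0]
    for a, b in ((fpr_hi, fnr_hi), (fnr_hi, fpr_hi)):
        if 1 - delta - a > 0 and b > 0:
            bounds.append(math.log((1 - delta - a) / b))
    return max(bounds)


# A strong attack (0.5% error rates over 10k trials each) certifies a
# nontrivial epsilon; a random-guessing attack certifies nothing.
print(eps_lower_bound(50, 50, 10_000, 10_000))
print(eps_lower_bound(5_000, 5_000, 10_000, 10_000))
```

The looseness of such bounds at practical sample sizes is exactly the discrepancy the abstract highlights: the interval width is driven by the confidence bounds on the error rates, which is what motivates replacing them with tighter Bayesian estimates.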

Tags: arxiv, bayesian, differential privacy, cs.LG, privacy
