Web: http://arxiv.org/abs/2111.11320

June 23, 2022, 1:11 a.m. | Hassan Ashtiani, Christopher Liaw

cs.LG updates on arXiv.org

We present a fairly general framework for reducing $(\varepsilon, \delta)$-differentially
private (DP) statistical estimation to its non-private counterpart. As the main
application of this framework, we give a polynomial-time, $(\varepsilon,\delta)$-DP
algorithm for learning (unrestricted) Gaussian distributions in $\mathbb{R}^d$. The
sample complexity of our approach for learning the Gaussian up to total variation
distance $\alpha$ is
$\widetilde{O}\!\left(\frac{d^2}{\alpha^2} + \frac{d^2\sqrt{\ln(1/\delta)}}{\alpha\varepsilon} + \frac{d\ln(1/\delta)}{\alpha\varepsilon}\right)$,
matching (up to logarithmic factors) the best known information-theoretic
(non-efficient) sample complexity upper …
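
The abstract states the reduction only at a high level, so below is a minimal, hypothetical sketch of the generic pattern it refers to: run the non-private estimator on disjoint batches, privately test that the batch estimates cluster tightly, and only then release a noisy aggregate (a sample-and-aggregate / propose-test-release flavour). This is not the paper's actual mechanism; the function name `dp_estimate_via_reduction`, the batch count `k`, the `closeness` radius, and the threshold and noise scales are all illustrative assumptions, not calibrated to give a formal $(\varepsilon,\delta)$-DP guarantee.

```python
import numpy as np

def dp_estimate_via_reduction(data, nonprivate_estimator, k=20,
                              eps=1.0, closeness=0.5, seed=0):
    """Illustrative sketch of the generic reduce-to-non-private pattern,
    NOT the paper's algorithm: estimate on k disjoint batches, privately
    check stability, then release a noisy aggregate."""
    rng = np.random.default_rng(seed)
    batches = np.array_split(np.asarray(data), k)
    # One non-private estimate per batch; changing a single individual
    # perturbs at most one batch, hence at most one estimate.
    estimates = np.array([np.atleast_1d(nonprivate_estimator(b))
                          for b in batches])

    center = np.median(estimates, axis=0)
    # Number of batch estimates within `closeness` of the median; this
    # count changes by at most 1 across neighbouring datasets.
    agreeing = int(np.sum(np.linalg.norm(estimates - center, axis=1)
                          <= closeness))

    # Propose-test-release flavour: add Laplace noise to the count and
    # refuse to answer unless most batches agree. A real analysis would
    # calibrate this threshold against delta; 0.75*k is a placeholder.
    if agreeing + rng.laplace(scale=1.0 / eps) < 0.75 * k:
        return None

    # Release the median with Laplace noise at scale closeness/eps, an
    # assumed stand-in for a properly calibrated release mechanism.
    return center + rng.laplace(scale=closeness / eps, size=center.shape)

# Toy usage: privately estimate the mean of 1-D Gaussian samples.
samples = np.random.default_rng(1).normal(loc=3.0, scale=1.0, size=10_000)
print(dp_estimate_via_reduction(samples, np.mean))
```

The stability test is what lets the non-private estimator do the statistical work: privacy only has to pay for the low-sensitivity agreement count and the final noisy release, which is the intuition behind reductions of this kind.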

algorithms arxiv learning ml polynomial time
