April 8, 2022, 1:11 a.m. | Bogdan Kulynych, Yao-Yuan Yang, Yaodong Yu, Jarosław Błasiok, Preetum Nakkiran

cs.LG updates on arXiv.org

We investigate and leverage a connection between Differential Privacy (DP)
and the recently proposed notion of Distributional Generalization (DG).
Applying this connection, we introduce new conceptual tools for designing
deep-learning methods that bypass "pathologies" of standard stochastic gradient
descent (SGD). First, we prove that differentially private methods satisfy a
"What You See Is What You Get (WYSIWYG)" generalization guarantee: whatever a
model does on its train data is almost exactly what it will do at test time.
This guarantee is …
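The feed truncates the abstract just as the guarantee is being formalized. For orientation, the following is a minimal LaTeX sketch of the form a WYSIWYG bound of this kind typically takes, assuming an epsilon-DP training algorithm A and a bounded test function phi; the symbols and the constant e^epsilon - 1 are illustrative assumptions drawn from the standard DP-implies-generalization literature, not quoted from the truncated abstract.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Sketch of a WYSIWYG / distributional-generalization bound.
% A : training algorithm, assumed here to be \varepsilon-differentially private.
% S \sim \mathcal{D}^n : the train set; h = A(S) : the trained model.
% \phi : (model, example) -> [0,1], any bounded test of model behavior.
% Left expectation: behavior on a train point; right: on a fresh test point.
\[
\Bigl|
\mathop{\mathbb{E}}_{\substack{S \sim \mathcal{D}^n,\ h = A(S) \\ z \sim S}}
  \bigl[\phi(h, z)\bigr]
\;-\;
\mathop{\mathbb{E}}_{\substack{S \sim \mathcal{D}^n,\ h = A(S) \\ z \sim \mathcal{D}}}
  \bigl[\phi(h, z)\bigr]
\Bigr|
\;\le\; e^{\varepsilon} - 1
\]
\end{document}

Read this as: swapping a training point for a fresh test point changes the expected value of any bounded observable of the trained model by at most a DP-style multiplicative factor, which is exactly the "what you see on train is what you get at test" property the abstract describes.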

algorithm, algorithm design, arxiv, deep learning, design, learning
