April 8, 2022, 1:11 a.m. | Bogdan Kulynych, Yao-Yuan Yang, Yaodong Yu, Jarosław Błasiok, Preetum Nakkiran

cs.LG updates on arXiv.org

We investigate and leverage a connection between Differential Privacy (DP)
and the recently proposed notion of Distributional Generalization (DG).
Applying this connection, we introduce new conceptual tools for designing
deep-learning methods that bypass "pathologies" of standard stochastic gradient
descent (SGD). First, we prove that differentially private methods satisfy a
"What You See Is What You Get (WYSIWYG)" generalization guarantee: whatever a
model does on its train data is almost exactly what it will do at test time.
This guarantee is …
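The truncated abstract does not spell out the authors' construction, but the mechanism underlying differentially private training is standard. Below is a minimal sketch of a DP-SGD-style update (per-example gradient clipping plus Gaussian noise) on a toy logistic-regression problem; the model, clipping norm, and noise multiplier are illustrative assumptions, not the paper's setup.

```python
# Minimal, illustrative DP-SGD step: per-example clipping + Gaussian noise.
# Toy logistic regression on synthetic data; NOT the paper's experimental setup.
import numpy as np

rng = np.random.default_rng(0)

def per_example_grads(w, X, y):
    """Logistic-loss gradients, one row per training example."""
    p = 1.0 / (1.0 + np.exp(-X @ w))           # predicted probabilities
    return (p - y)[:, None] * X                # shape: (n_examples, n_features)

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.0):
    g = per_example_grads(w, X, y)
    # Clip each example's gradient to L2 norm <= clip_norm.
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    g = g * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Add Gaussian noise calibrated to the clipping norm, then average.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=g.shape[1])
    g_priv = (g.sum(axis=0) + noise) / len(X)
    return w - lr * g_priv

# Toy data: 256 examples, 5 features; label depends on the first feature.
X = rng.normal(size=(256, 5))
y = (X[:, 0] > 0).astype(float)
w = np.zeros(5)
for _ in range(100):
    w = dp_sgd_step(w, X, y)
print("train accuracy:", ((1.0 / (1.0 + np.exp(-X @ w)) > 0.5) == y).mean())
```

Per the abstract, a model trained with a differentially private mechanism of this kind inherits the WYSIWYG guarantee: its behavior on the training data closely predicts its behavior at test time, with the strength of that guarantee governed by the privacy parameters.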
