Feb. 2, 2024, 3:47 p.m. | Mark Bun, Gautam Kamath, Argyris Mouzakis, Vikrant Singhal

stat.ML updates on arXiv.org

We give an example of a class of distributions that is learnable in total variation distance with a finite number of samples, but not learnable under $(\varepsilon, \delta)$-differential privacy. This refutes a conjecture of Ashtiani.
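For context, the snippet does not restate the privacy definition it refers to. The standard definition of $(\varepsilon, \delta)$-differential privacy, which the result concerns, is: a randomized algorithm $M$ is $(\varepsilon, \delta)$-differentially private if for all pairs of datasets $X, X'$ differing in one record, and all measurable sets $S$ of outputs,

$$\Pr[M(X) \in S] \le e^{\varepsilon} \Pr[M(X') \in S] + \delta.$$

The result says that finite-sample learnability in total variation distance does not imply the existence of any estimator satisfying this constraint for the same class.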

