Feb. 15, 2024, 5:43 a.m. | Andrew Lowy, Zeman Li, Tianjian Huang, Meisam Razaviyayn

cs.LG updates on arXiv.org

arXiv:2306.15056v2 Announce Type: replace
Abstract: Differential privacy (DP) ensures that training a machine learning model does not leak private data. In practice, we may have access to auxiliary public data that is free of privacy concerns. In this work, we assume access to a given amount of public data and settle the following fundamental open questions: 1. What is the optimal (worst-case) error of a DP model trained over a private data set while having access to side public data? …
