Optimal Differentially Private Model Training with Public Data
Feb. 15, 2024, 5:43 a.m. | Andrew Lowy, Zeman Li, Tianjian Huang, Meisam Razaviyayn
cs.LG updates on arXiv.org
Abstract: Differential privacy (DP) ensures that training a machine learning model does not leak private data. In practice, we may have access to auxiliary public data that is free of privacy concerns. In this work, we assume access to a given amount of public data and settle the following fundamental open questions: 1. What is the optimal (worst-case) error of a DP model trained over a private data set while having access to side public data? …
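The setting in the abstract — DP training on a private set while also exploiting a small public sample — can be made concrete with a minimal sketch. This is not the paper's algorithm; it illustrates one common baseline under assumed choices: DP-SGD-style clipped, noised gradients on the private data, mixed with ordinary noiseless gradients on the public data. The mixing weight `alpha`, the clipping norm `c`, and the noise scale `sigma` are all illustrative.

```python
# Sketch: DP-SGD on private data combined with noiseless public gradients.
# Not the paper's method; parameters (alpha, c, sigma) are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data; w_true is the target parameter.
w_true = np.array([2.0, -1.0])
X_priv = rng.normal(size=(500, 2))
y_priv = X_priv @ w_true + 0.1 * rng.normal(size=500)
X_pub = rng.normal(size=(50, 2))   # small public sample, no privacy cost
y_pub = X_pub @ w_true + 0.1 * rng.normal(size=50)

def clip(g, c):
    """Clip a per-example gradient to L2 norm at most c."""
    return g * min(1.0, c / np.linalg.norm(g))

w = np.zeros(2)
lr, c, sigma, alpha = 0.05, 1.0, 0.5, 0.5   # alpha weights the private step
for _ in range(300):
    # Private step: per-example clipping + Gaussian noise (DP-SGD style).
    idx = rng.choice(len(X_priv), size=32, replace=False)
    grads = [clip(2 * (x @ w - y) * x, c)
             for x, y in zip(X_priv[idx], y_priv[idx])]
    g_priv = np.mean(grads, axis=0) + (sigma * c / 32) * rng.normal(size=2)
    # Public step: ordinary noiseless gradient on the public sample.
    g_pub = 2 * X_pub.T @ (X_pub @ w - y_pub) / len(X_pub)
    w -= lr * (alpha * g_priv + (1 - alpha) * g_pub)

print(w)   # should land near w_true = [2, -1]
```

The public gradient is free of noise and clipping bias, so even a small public sample can reduce the error of the final model — the paper's question is how much, optimally, in the worst case.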