Feb. 15, 2024, 5:43 a.m. | Andrew Lowy, Zeman Li, Tianjian Huang, Meisam Razaviyayn

cs.LG updates on arXiv.org

arXiv:2306.15056v2 Announce Type: replace
Abstract: Differential privacy (DP) ensures that training a machine learning model does not leak private data. In practice, we may have access to auxiliary public data that is free of privacy concerns. In this work, we assume access to a given amount of public data and settle the following fundamental open questions: 1. What is the optimal (worst-case) error of a DP model trained over a private data set while having access to side public data? …
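For context, the privacy guarantee the abstract refers to is the standard (ε, δ)-differential-privacy definition (a textbook statement, not a detail specific to this paper):

```latex
% A randomized training algorithm \mathcal{M} is (\epsilon, \delta)-differentially
% private if, for all neighboring datasets D, D' differing in a single record
% and all measurable output sets S,
\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\epsilon}\,\Pr[\mathcal{M}(D') \in S] + \delta
```

Smaller ε and δ mean a stronger guarantee; the paper studies how access to auxiliary public data changes the best achievable error under this constraint.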

