DP$^2$-VAE: Differentially Private Pre-trained Variational Autoencoders. (arXiv:2208.03409v2 [cs.LG] UPDATED)
Nov. 4, 2022, 1:12 a.m. | Dihong Jiang, Guojun Zhang, Mahdi Karami, Xi Chen, Yunfeng Shao, Yaoliang Yu
cs.LG updates on arXiv.org arxiv.org
Modern machine learning systems achieve great success when trained on large
datasets. However, these datasets usually contain sensitive information (e.g.,
medical records, face images), leading to serious privacy concerns.
Differentially private generative models (DPGMs) have emerged as a solution to
such privacy concerns: instead of releasing the sensitive data, they release
privatized synthetic data generated from it. As with other differentially
private (DP) learners, the central challenge for DPGMs is striking a delicate
balance between utility and privacy. We propose DP$^2$-VAE, a novel training
mechanism for …
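The utility/privacy balance the abstract refers to is typically controlled by two knobs in DP training: a per-sample gradient clipping bound and a noise multiplier. The sketch below shows a generic DP-SGD-style gradient privatization step to illustrate that trade-off; it is a minimal illustration of standard DP training, not the DP$^2$-VAE mechanism from the paper (the function name and parameters are our own).

```python
import numpy as np

def dp_privatize_gradients(per_sample_grads, clip_norm, noise_multiplier, rng):
    """Generic DP-SGD-style aggregation: clip each per-sample gradient,
    sum, add Gaussian noise scaled to the clipping bound, then average.

    Illustrative only -- NOT the DP^2-VAE training mechanism.
    """
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # Clipping bounds each sample's influence (the sensitivity).
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    summed = np.sum(clipped, axis=0)
    # Larger noise_multiplier => stronger privacy, lower utility.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / len(per_sample_grads)
```

With `noise_multiplier = 0` the function reduces to plain clipped-gradient averaging; increasing it trades utility for a tighter privacy guarantee.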