March 7, 2024, 5:43 a.m. | Yun Lu, Malik Magdon-Ismail, Yu Wei, Vassilis Zikas

cs.LG updates on arXiv.org

arXiv:2309.01243v2 Announce Type: replace-cross
Abstract: To achieve differential privacy (DP), one typically randomizes the output of the underlying query. In big data analytics, one often uses randomized sketching/aggregation algorithms to make processing high-dimensional data tractable. Intuitively, such machine learning (ML) algorithms should provide some inherent privacy, yet most, if not all, existing DP mechanisms do not leverage this inherent randomness, resulting in potentially redundant noising.
The motivating question of our work is:
(How) can we improve the utility of DP …
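
The contrast the abstract draws can be made concrete with a minimal sketch: a conventional DP mechanism that noises a query's exact output (here a Laplace mechanism, assuming a sensitivity-1 sum query), next to a randomized sketching step (a Johnson-Lindenstrauss-style random projection) whose internal randomness conventional DP accounting ignores. The function names, parameters, and toy data below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def laplace_mechanism(true_answer: float, sensitivity: float, epsilon: float,
                      rng: np.random.Generator) -> float:
    """Standard output perturbation: add Laplace(sensitivity/epsilon) noise
    to the exact query answer to obtain epsilon-DP."""
    scale = sensitivity / epsilon
    return true_answer + rng.laplace(loc=0.0, scale=scale)

def random_projection_sketch(X: np.ndarray, k: int,
                             rng: np.random.Generator) -> np.ndarray:
    """Johnson-Lindenstrauss-style sketch: project n x d data onto k random
    directions. The projection matrix is itself random -- the kind of
    inherent randomness the abstract argues could be credited toward privacy."""
    n, d = X.shape
    R = rng.normal(size=(d, k)) / np.sqrt(k)
    return X @ R

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 50))      # toy high-dimensional dataset (assumed)
    exact_sum = X[:, 0].sum()            # a simple sum query over one attribute
    # Conventional DP: noise the exact answer, regardless of upstream randomness.
    dp_answer = laplace_mechanism(exact_sum, sensitivity=1.0, epsilon=0.5, rng=rng)
    # Sketch first: the data passed downstream is already randomized.
    sketch = random_projection_sketch(X, k=10, rng=rng)
    print(dp_answer, sketch.shape)
```

The point of the juxtaposition is only to show where the potentially redundant noising arises: the Laplace noise is calibrated as if the pipeline were deterministic, even though the sketching step already injects randomness of its own.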

