March 7, 2024, 5:43 a.m. | Yun Lu, Malik Magdon-Ismail, Yu Wei, Vassilis Zikas

cs.LG updates on arXiv.org

arXiv:2309.01243v2 Announce Type: replace-cross
Abstract: To achieve differential privacy (DP), one typically randomizes the output of the underlying query. In big data analytics, one often uses randomized sketching/aggregation algorithms to make processing high-dimensional data tractable. Intuitively, such machine learning (ML) algorithms should provide some inherent privacy, yet most, if not all, existing DP mechanisms do not leverage this inherent randomness, resulting in potentially redundant noising.
The motivating question of our work is:
(How) can we improve the utility of DP …
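For context on the abstract's first sentence, the standard way to "randomize the output of the underlying query" is to add noise calibrated to the query's sensitivity. Below is a minimal sketch of the classic Laplace mechanism in Python; it is a reference point only, not the mechanism proposed in this paper, and the function name, counting-query example, and parameter values are illustrative assumptions. Note that the noise scale sensitivity/epsilon is chosen without regard to any randomness already inside the query (e.g., a randomized sketch), which is the potential redundancy the abstract highlights.

import numpy as np

def laplace_mechanism(query_value, sensitivity, epsilon, rng=None):
    # Classic epsilon-DP release: add Laplace noise with scale sensitivity / epsilon.
    # This calibration ignores any randomness the query itself already uses,
    # which is the kind of "redundant noising" the abstract refers to.
    rng = np.random.default_rng() if rng is None else rng
    return query_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Illustrative usage: privatize a counting query (sensitivity 1) at epsilon = 0.5.
true_count = 1234
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(noisy_count)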

