Differentially Private Optimization with Sparse Gradients
April 18, 2024, 4:43 a.m. | Badih Ghazi, Cristóbal Guzmán, Pritish Kamath, Ravi Kumar, Pasin Manurangsi
stat.ML updates on arXiv.org arxiv.org
Abstract: Motivated by applications of large embedding models, we study differentially private (DP) optimization problems under sparsity of individual gradients. We start with new near-optimal bounds for the classic mean estimation problem, but with sparse data, improving upon existing algorithms, particularly in the high-dimensional regime. Building on this, we obtain pure- and approximate-DP algorithms with almost optimal rates for stochastic convex optimization with sparse gradients; the former represents the first nearly dimension-independent rates for this problem. …
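The excerpt above does not include the paper's algorithms, so as background only, here is a minimal sketch of the standard dense baseline for DP mean estimation: clip each user's vector to a fixed L2 norm and add Gaussian noise calibrated to the sensitivity of the mean. The noise added by this baseline grows with the ambient dimension d, which is precisely the cost that sparsity-aware methods like those in the paper aim to avoid. The function name and parameters below are illustrative, not from the paper.

```python
import numpy as np

def dp_mean_gaussian(X, clip_norm, epsilon, delta, rng=None):
    """Gaussian-mechanism estimate of the mean of the rows of X.

    This is the generic dense baseline, NOT the paper's sparsity-aware
    algorithm; its error scales with sqrt(d), which is what the paper
    improves on when individual gradients are sparse.
    """
    rng = np.random.default_rng() if rng is None else rng
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    # Clip each user's contribution to L2 norm `clip_norm`.
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    clipped = X * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Replace-one sensitivity of the mean is 2 * clip_norm / n.
    sensitivity = 2.0 * clip_norm / n
    # Classical Gaussian-mechanism calibration for (epsilon, delta)-DP
    # (stated for epsilon <= 1; used here purely as an illustration).
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped.mean(axis=0) + rng.normal(0.0, sigma, size=d)
```

Note that every coordinate receives noise of magnitude sigma, even coordinates that are zero in every input vector; for sparse data this wastes privacy budget on empty dimensions, which motivates the sparse mean-estimation bounds in the abstract.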