Differentially Private Federated Learning without Noise Addition: When is it Possible?
May 9, 2024, 4:41 a.m. | Jiang Zhang, Yahya H Ezzeldin, Ahmed Roushdy Elkordy, Konstantinos Psounis, Salman Avestimehr
cs.LG updates on arXiv.org arxiv.org
Abstract: Federated Learning (FL) with Secure Aggregation (SA) has gained significant attention as a privacy-preserving framework for training machine learning models: the server is prevented from learning information about users' data from their individual encrypted model updates. Recent research has extended the privacy guarantees of FL with SA by bounding the information leakage through the aggregate model over multiple training rounds, leveraging the "noise" contributed by other users' updates. However, the privacy metric used in …
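The core secure-aggregation property the abstract relies on — the server observes only the sum of users' updates, never any individual update — can be illustrated with pairwise additive masking. This is a simplified, hypothetical sketch (small integer vectors, a toy modulus, no dropout handling), not the paper's protocol:

```python
import random

MODULUS = 2 ** 16  # toy modulus; assumes updates are small non-negative ints

def make_pairwise_masks(num_users, dim, seed=0):
    """For each user pair (i, j) with i < j, draw a shared random mask;
    user i adds it and user j subtracts it, so all masks cancel in the sum."""
    rng = random.Random(seed)
    masks = [[0] * dim for _ in range(num_users)]
    for i in range(num_users):
        for j in range(i + 1, num_users):
            shared = [rng.randrange(MODULUS) for _ in range(dim)]
            for k in range(dim):
                masks[i][k] = (masks[i][k] + shared[k]) % MODULUS
                masks[j][k] = (masks[j][k] - shared[k]) % MODULUS
    return masks

def mask_update(update, mask):
    """Each user uploads only its masked update, which looks random on its own."""
    return [(u + m) % MODULUS for u, m in zip(update, mask)]

# Three users; the server sums the masked updates and recovers only the aggregate.
updates = [[1, 2], [3, 4], [5, 6]]
masks = make_pairwise_masks(num_users=3, dim=2)
masked = [mask_update(u, m) for u, m in zip(updates, masks)]
aggregate = [sum(col) % MODULUS for col in zip(*masked)]
# aggregate equals the elementwise sum of the raw updates: [9, 12]
```

Each masked upload is uniformly random to the server, yet the masks cancel exactly in the sum — which is why, as the abstract notes, any individual-level privacy must come from the "noise" of the other users' updates inside that aggregate.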
More from arxiv.org / cs.LG updates on arXiv.org