Web: http://arxiv.org/abs/2110.09074

Jan. 27, 2022, 2:11 a.m. | Jiahui Geng, Yongli Mou, Feifei Li, Qing Li, Oya Beyan, Stefan Decker, Chunming Rong

cs.LG updates on arXiv.org arxiv.org

Unlike traditional centralized training, federated learning (FL) improves the
global model by sharing and aggregating local models rather than local data,
thereby protecting users' privacy. Although this training approach appears
secure, research has demonstrated that an attacker can still recover private
data from the shared gradient information. Such on-the-fly reconstruction
attacks deserve in-depth study because they can occur at any stage of training,
whether at the beginning or at the end …
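To make the threat concrete, here is a minimal NumPy sketch of why shared gradients can leak inputs. It is not the paper's attack; it uses the well-known analytical observation that for a single fully-connected layer y = Wx + b, the weight gradient is an outer product of the output gradient and the input, so the input can be recovered exactly by dividing a row of dL/dW by the matching entry of dL/db. All names and dimensions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: one fully-connected layer y = W x + b trained with MSE loss.
d_in, d_out = 8, 4
W = rng.normal(size=(d_out, d_in))
b = rng.normal(size=d_out)

x = rng.normal(size=d_in)           # the client's private input
target = rng.normal(size=d_out)

# Client-side forward/backward pass; in FL only the gradients are shared.
y = W @ x + b
dL_dy = 2.0 * (y - target)          # dL/dy for MSE loss
grad_W = np.outer(dL_dy, x)         # dL/dW = (dL/dy) x^T
grad_b = dL_dy                      # dL/db = dL/dy

# Attacker side: each row of dL/dW equals (dL/dy)_i * x, so dividing by
# the matching bias gradient recovers the private input exactly.
i = int(np.argmax(np.abs(grad_b)))  # pick a row with a nonzero bias gradient
x_recovered = grad_W[i] / grad_b[i]

print(np.allclose(x_recovered, x))  # → True
```

Deeper networks require iterative optimization (matching dummy gradients to the shared ones) rather than this closed-form trick, but the single-layer case already shows that gradients are not privacy-neutral.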

Tags: arxiv, deep learning, federated learning
