Web: http://arxiv.org/abs/2206.10032

June 24, 2022, 1:11 a.m. | Hossein Zakerinia, Shayan Talaei, Giorgi Nadiradze, Dan Alistarh

cs.LG updates on arXiv.org

Federated Learning (FL) is an emerging paradigm for enabling the large-scale
distributed training of machine learning models while still providing privacy
guarantees.


In this work, we jointly address two of the main practical challenges when
scaling federated optimization to large node counts: the need for tight
synchronization between the central authority and individual computing nodes,
and the large communication cost of transmissions between the central server
and clients.


Specifically, we present a new variant of the classic federated averaging
(FedAvg) …
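The abstract is truncated above, so the paper's actual variant is not shown here. For context, a minimal sketch of the classic FedAvg round structure that the work builds on is given below; the toy least-squares objective, client count, and hyperparameters are illustrative assumptions, not the paper's setup.

# Minimal sketch of classic synchronous FedAvg (McMahan et al., 2017),
# the baseline the paper modifies. All data, shapes, and hyperparameters
# below are hypothetical and chosen only to make the example runnable.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each client holds a private least-squares problem.
NUM_CLIENTS, DIM, N_PER_CLIENT = 10, 5, 20
client_A = [rng.normal(size=(N_PER_CLIENT, DIM)) for _ in range(NUM_CLIENTS)]
client_b = [rng.normal(size=N_PER_CLIENT) for _ in range(NUM_CLIENTS)]

def local_sgd(w, A, b, steps=5, lr=0.01):
    """Run a few local gradient steps on one client's data, starting from w."""
    w = w.copy()
    for _ in range(steps):
        grad = A.T @ (A @ w - b) / len(b)  # gradient of 0.5 * ||Aw - b||^2 / n
        w -= lr * grad
    return w

# Server loop: broadcast the global model, wait for every client's locally
# updated model, and average them. The tight synchronization and full-model
# transmissions in this loop are exactly the costs the paper targets.
w_global = np.zeros(DIM)
for rnd in range(50):
    local_models = [local_sgd(w_global, client_A[i], client_b[i])
                    for i in range(NUM_CLIENTS)]
    w_global = np.mean(local_models, axis=0)  # uniform average (equal-sized shards)

print("final global model:", w_global)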

