Feb. 15, 2024, 5:42 a.m. | Tao Yu, Congzheng Song, Jianyu Wang, Mona Chitnis

cs.LG updates on arXiv.org

arXiv:2402.09247v1 Announce Type: new
Abstract: Asynchronous protocols have been shown to improve the scalability of federated learning (FL) with a massive number of clients. Meanwhile, momentum-based methods can achieve the best model quality in synchronous FL. However, naively applying momentum in asynchronous FL algorithms leads to slower convergence and degraded model performance. It is still unclear how to effectively combine these two techniques to achieve a win-win. In this paper, we find that asynchrony introduces implicit bias to momentum …
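The naive combination the abstract warns about can be made concrete with a short sketch: a server that folds every asynchronously arriving client delta into one shared momentum buffer, regardless of how stale the model version that delta was computed against is. This is a minimal illustration of the failure mode, not the paper's proposed algorithm; the names `AsyncMomentumServer`, `apply_client_delta`, `lr`, and `beta` are hypothetical.

```python
# Sketch of NAIVE server-side momentum in asynchronous FL
# (illustrative only; not the method proposed in arXiv:2402.09247).
import numpy as np

class AsyncMomentumServer:
    def __init__(self, model_size: int, lr: float = 0.1, beta: float = 0.9):
        self.weights = np.zeros(model_size)   # global model parameters
        self.momentum = np.zeros(model_size)  # shared server-side momentum buffer
        self.lr = lr                          # server learning rate (assumed)
        self.beta = beta                      # momentum coefficient (assumed)
        self.version = 0                      # global model version counter

    def apply_client_delta(self, delta: np.ndarray, client_version: int) -> int:
        """Apply one client's update as soon as it arrives (asynchronous).

        Because `delta` was computed against a stale copy of the model
        (client_version <= self.version), folding it directly into the
        shared momentum buffer mixes updates from different model
        versions -- the implicit bias to momentum that the abstract
        identifies as the cause of slower convergence.
        """
        staleness = self.version - client_version
        self.momentum = self.beta * self.momentum + delta  # no staleness correction
        self.weights -= self.lr * self.momentum
        self.version += 1
        return staleness
```

In the synchronous setting every delta in the buffer comes from the same model version, so momentum accumulates a consistent direction; under asynchrony the buffer mixes stale and fresh directions, which is the bias this paper sets out to characterize and correct.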

