Unbounded Gradients in Federated Learning with Buffered Asynchronous Aggregation. (arXiv:2210.01161v1 [cs.LG])
Oct. 5, 2022, 1:11 a.m. | Mohammad Taha Toghani, César A. Uribe
cs.LG updates on arXiv.org
Synchronous updates may compromise the efficiency of cross-device federated learning once the number of active clients increases. The FedBuff algorithm (Nguyen et al., 2022) alleviates this problem by allowing asynchronous updates (staleness), which enhances the scalability of training while preserving privacy via secure aggregation. We revisit the FedBuff algorithm for asynchronous federated learning and extend the existing analysis by removing the boundedness assumption on the gradient norm. This paper presents a theoretical analysis of the convergence rate of this algorithm …
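For readers unfamiliar with the buffered asynchronous aggregation the abstract builds on, here is a minimal sketch of the idea: the server accepts client updates as they arrive and applies them to the global model only after a buffer of K updates has filled. The class and parameter names, and the 1/sqrt(1 + staleness) down-weighting of stale updates, are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

class BufferedAsyncServer:
    """Minimal sketch of buffered asynchronous aggregation in the style
    of FedBuff (Nguyen et al., 2022). Client updates arrive asynchronously
    and are accumulated in a buffer; the server model advances only once
    K updates have been collected. Names and the staleness weighting are
    illustrative, not the paper's exact specification."""

    def __init__(self, model: np.ndarray, buffer_size: int = 10, server_lr: float = 1.0):
        self.model = model
        self.buffer_size = buffer_size  # K in the FedBuff notation
        self.server_lr = server_lr
        self._buffer = []   # accumulated (weighted) client deltas
        self._round = 0     # current server model version

    def receive_update(self, delta: np.ndarray, client_round: int) -> None:
        """Accept one (possibly stale) client update.

        `delta` is the client's local model change computed against model
        version `client_round`, so staleness = current round - client_round.
        Stale updates are down-weighted by 1/sqrt(1 + staleness), a common
        heuristic (an assumption here)."""
        staleness = self._round - client_round
        weight = 1.0 / np.sqrt(1.0 + staleness)
        self._buffer.append(weight * delta)

        # Once K updates are buffered, apply their average to the server
        # model, clear the buffer, and advance the model version.
        if len(self._buffer) >= self.buffer_size:
            aggregate = np.mean(self._buffer, axis=0)
            self.model = self.model + self.server_lr * aggregate
            self._buffer.clear()
            self._round += 1
```

Because the server never waits for all clients, throughput scales with the number of active clients; the buffer of K updates is what lets secure aggregation still be applied to the batched deltas.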