FedLion: Faster Adaptive Federated Optimization with Fewer Communication
Feb. 16, 2024, 5:42 a.m. | Zhiwei Tang, Tsung-Hui Chang
cs.LG updates on arXiv.org
Abstract: In Federated Learning (FL), a framework to train machine learning models across distributed data, well-known algorithms like FedAvg tend to have slow convergence rates, resulting in high communication costs during training. To address this challenge, we introduce FedLion, an adaptive federated optimization algorithm that seamlessly incorporates key elements from the recently proposed centralized adaptive algorithm, Lion (Chen et al. 2023), into the FL framework. Through comprehensive evaluations on two widely adopted FL benchmarks, we demonstrate …
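For context, the centralized Lion update that FedLion builds on applies the *sign* of an interpolated momentum rather than the raw gradient. The sketch below is a minimal NumPy implementation of that Lion step (Chen et al. 2023); the function name, hyperparameter defaults, and the federated adaptation details are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def lion_step(params, grad, momentum,
              lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
    """One Lion optimizer step (Chen et al. 2023), as a sketch.

    The update direction is the sign of an interpolation between the
    running momentum and the current gradient; the momentum is then
    refreshed with a second interpolation coefficient. FedLion adapts
    this sign-based update to the federated setting (details in paper).
    """
    # Sign of interpolated momentum gives a bounded, coordinate-wise update.
    update = np.sign(beta1 * momentum + (1.0 - beta1) * grad)
    # Decoupled weight decay, as in the original Lion formulation.
    new_params = params - lr * (update + wd * params)
    # Momentum tracks the gradient with a separate coefficient beta2.
    new_momentum = beta2 * momentum + (1.0 - beta2) * grad
    return new_params, new_momentum
```

Because each coordinate of the update is in {-1, 0, +1}, a federated client could in principle communicate it far more cheaply than a full-precision gradient, which is consistent with the abstract's emphasis on reduced communication.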