April 15, 2024, 4:42 a.m. | Xiaowen Jiang, Anton Rodomanov, Sebastian U. Stich

cs.LG updates on arXiv.org

arXiv:2404.08447v1 Announce Type: new
Abstract: Federated learning is a distributed optimization paradigm that allows training machine learning models across decentralized devices while keeping the data localized. The standard method, FedAvg, suffers from client drift, which can hamper performance and increase communication costs over centralized methods. Previous works have proposed various strategies to mitigate drift, yet none have shown uniformly improved communication-computation trade-offs over vanilla gradient descent.
In this work, we revisit DANE, an established method in distributed optimization. We show that …
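For readers unfamiliar with the baseline being discussed, the sketch below illustrates vanilla FedAvg on a toy least-squares problem: each client runs a few local gradient steps on its own data, and the server then averages the local models once per communication round. With heterogeneous client data, many local steps pull each client toward its own optimum, which is the "client drift" the abstract refers to. This is only an illustrative sketch of FedAvg under assumed names and a least-squares loss, not the paper's DANE-based method.

```python
# Minimal FedAvg sketch (illustrative only; loss, step sizes, and helper names are assumptions).
import numpy as np

def local_sgd(w, X, y, steps=10, lr=0.1):
    """Run a few local gradient steps on one client's least-squares loss."""
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of 0.5 * ||Xw - y||^2 / n
        w = w - lr * grad
    return w

def fedavg(clients, w0, rounds=20, local_steps=10, lr=0.1):
    """clients: list of (X, y) pairs, one per device. Returns the averaged model."""
    w = w0.copy()
    for _ in range(rounds):
        # Each client starts from the current global model and trains locally.
        local_models = [local_sgd(w, X, y, local_steps, lr) for X, y in clients]
        # The server averages the local models (one communication round).
        w = np.mean(local_models, axis=0)
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 5
    # Heterogeneous clients: shifted targets give each client a different local optimum,
    # which induces the drift discussed above.
    clients = [(rng.normal(size=(50, d)), rng.normal(size=50) + i) for i in range(4)]
    w = fedavg(clients, w0=np.zeros(d))
    print("averaged model:", w)
```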

