May 23, 2022, 1:11 a.m. | Nicole Mitchell, Johannes Ballé, Zachary Charles, Jakub Konečný

stat.ML updates on arXiv.org

A significant bottleneck in federated learning (FL) is the network
communication cost of sending model updates from client devices to the central
server. We present a comprehensive empirical study of the statistics of model
updates in FL, as well as the role and benefits of various compression
techniques. Motivated by these observations, we propose a novel method that
reduces the average communication cost, is near-optimal in many use cases, and
outperforms Top-K, DRIVE, 3LC, and QSGD on Stack Overflow …
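
The excerpt does not spell out the proposed method, but Top-K, one of the baselines it names, is simple enough to illustrate: each client keeps only the k largest-magnitude coordinates of its model update and transmits those (index, value) pairs instead of the full dense vector. A minimal NumPy sketch (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def top_k_sparsify(update: np.ndarray, k: int) -> np.ndarray:
    """Top-K compression baseline: zero out all but the k
    largest-magnitude entries of a client's model update."""
    flat = update.ravel()
    if k >= flat.size:
        return update.copy()
    # Indices of the k entries with the largest absolute value.
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.reshape(update.shape)

# Example: keep 1% of a simulated 1,000-dimensional update.
rng = np.random.default_rng(0)
update = rng.normal(size=1000)
compressed = top_k_sparsify(update, k=10)
print(np.count_nonzero(compressed))  # 10
```

Only the k surviving (index, value) pairs need to cross the network, so the per-round upload shrinks from d dense floats to roughly k pairs, at the cost of a biased, lossy update.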

