May 16, 2022, 1:11 a.m. | Jonghwan Park, Dohyeok Kwon, Songnam Hong

cs.LG updates on arXiv.org

Online federated learning (OFL) is a promising framework for collaboratively
learning a sequence of non-linear functions (or models) from distributed
streaming data arriving at multiple clients while preserving the privacy of
their local data. In this framework, we first construct a vanilla method
(named OFedAvg) by incorporating online gradient descent (OGD) into the de
facto aggregation method (named FedAvg). Despite its optimal asymptotic
performance, OFedAvg suffers from heavy communication overhead and long
learning delay. To tackle these shortcomings, we propose …
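To make the vanilla construction concrete, below is a minimal sketch of one OFedAvg round as the abstract describes it: each client takes a single OGD step on the sample that just arrived in its stream, and the server aggregates the updated models by FedAvg-style averaging. The linear model, squared loss, step size, and uniform client weighting are illustrative assumptions, not details from the paper (whose setting is non-linear functions).

# Minimal OFedAvg sketch, assuming a linear model and squared loss for
# illustration: one OGD step per client per round, then FedAvg averaging.
import numpy as np

rng = np.random.default_rng(0)
num_clients, dim, rounds, lr = 10, 5, 100, 0.1

w_global = np.zeros(dim)           # shared model held by the server
w_true = rng.normal(size=dim)      # hidden target used to generate the streams

for t in range(rounds):
    local_models = []
    for k in range(num_clients):
        # One new streaming sample (x_t, y_t) arrives at client k.
        x = rng.normal(size=dim)
        y = w_true @ x + 0.01 * rng.normal()
        # OGD step on the squared loss (w^T x - y)^2, starting from the
        # current global model.
        w_k = w_global.copy()
        grad = 2.0 * (w_k @ x - y) * x
        local_models.append(w_k - lr * grad)
    # FedAvg aggregation: the server averages the clients' updated models.
    w_global = np.mean(local_models, axis=0)

print("estimation error:", np.linalg.norm(w_global - w_true))

Note that this baseline transmits every client's full model in every round, which is exactly the communication overhead the (truncated) proposal targets; judging by the tags, it appears to do so via quantization and intermittent transmission.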

arxiv communication federated learning intermittent learning quantization
