June 24, 2022, 1:12 a.m. | Hong-You Chen, Cheng-Hao Tu, Ziwei Li, Han-Wei Shen, Wei-Lun Chao

cs.CV updates on arXiv.org

In most of the literature on federated learning (FL), neural networks are
initialized with random weights. In this paper, we present an empirical study
on the effect of pre-training on FL. Specifically, we aim to investigate if
pre-training can alleviate the drastic accuracy drop when clients'
decentralized data are non-IID. We focus on FedAvg, the fundamental and most
widely used FL algorithm. We found that pre-training does largely close the gap
between FedAvg and centralized learning under non-IID data, but …
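To make the setting concrete, below is a minimal sketch of FedAvg with either random or pre-trained initialization of the global weights. It uses a toy linear model, two synthetic non-IID clients, and hypothetical parameter names (num_rounds, local_epochs); it illustrates the averaging scheme and the two initialization choices the paper compares, not the paper's actual models or experimental setup.

```python
# Minimal FedAvg sketch: local training on each client, then a
# dataset-size-weighted average of the client weights on the server.
# The toy model, data, and hyperparameters are illustrative assumptions.
import numpy as np

def local_update(weights, X, y, lr=0.1, local_epochs=5):
    """A few epochs of gradient descent on one client's data (linear regression)."""
    w = weights.copy()
    for _ in range(local_epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

def fedavg(clients, init_weights, num_rounds=20):
    """Each round, clients start from the global weights, train locally,
    and the server averages their weights, weighted by dataset size."""
    global_w = init_weights.copy()
    for _ in range(num_rounds):
        sizes, updates = [], []
        for X, y in clients:
            updates.append(local_update(global_w, X, y))
            sizes.append(len(y))
        sizes = np.array(sizes, dtype=float)
        global_w = np.average(updates, axis=0, weights=sizes / sizes.sum())
    return global_w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim = 5
    # Two clients whose data are drawn around different targets (non-IID).
    clients = []
    for shift in (0.0, 2.0):
        X = rng.normal(size=(100, dim))
        y = X @ (np.ones(dim) + shift) + 0.1 * rng.normal(size=100)
        clients.append((X, y))
    random_init = rng.normal(size=dim)   # random initialization
    pretrained_init = np.ones(dim)       # stand-in for pre-trained weights
    for name, init in [("random", random_init), ("pre-trained", pretrained_init)]:
        print(name, np.round(fedavg(clients, init), 3))
```

In the paper's terms, the comparison is between starting `fedavg` from random weights versus from a pre-trained model, with the clients' data distributions deliberately made non-IID.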

Tags: arXiv, federated learning, cs.LG, pre-training, training
