April 15, 2024, 4:43 a.m. | Wenhao Yuan, Xuehe Wang

cs.LG updates on arXiv.org

arXiv:2303.15799v4 Announce Type: replace
Abstract: Federated Learning (FL) has emerged as a pivotal paradigm in distributed model training: multiple devices collaboratively refine a shared model on their respective local datasets, orchestrated by a central server, while private data remains localized on each device. Nonetheless, the non-independent-and-identically-distributed (Non-IID) data generated on heterogeneous clients and the incessant information exchange among participants can markedly impede training efficacy and slow convergence. In this paper, we refine the conventional stochastic gradient …
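
The paper's own refinement is cut off above, but the setting it builds on is the standard FedAvg-style protocol: clients run local SGD on private, possibly Non-IID data, and a central server aggregates their models each communication round. A minimal sketch of that baseline follows, assuming a toy linear-regression task; the data generator, learning rate, and round count are illustrative choices, not details from the paper.

# FedAvg-style baseline: local SGD on each client, weighted model
# averaging at the server. This sketches the conventional setup the
# abstract says the paper refines, not the paper's method itself.
import numpy as np

rng = np.random.default_rng(0)

def make_client_data(shift, n=50):
    # Toy Non-IID data: each client's feature distribution is shifted
    # differently, mimicking heterogeneous clients (assumed setup).
    X = rng.normal(loc=shift, size=(n, 3))
    true_w = np.array([1.0, -2.0, 0.5])
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return X, y

def local_sgd(w, X, y, lr=0.01, epochs=5):
    # Plain local SGD on a squared-error objective; private data never
    # leaves the client, only the updated model is returned.
    w = w.copy()
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            grad = 2.0 * (X[i] @ w - y[i]) * X[i]
            w -= lr * grad
    return w

# Three heterogeneous clients with differently shifted local data.
clients = [make_client_data(shift) for shift in (-2.0, 0.0, 3.0)]
sizes = np.array([len(y) for _, y in clients], dtype=float)

w_global = np.zeros(3)
for rnd in range(20):  # communication rounds between server and clients
    local_models = [local_sgd(w_global, X, y) for X, y in clients]
    # Server step: dataset-size-weighted average of client models (FedAvg).
    w_global = np.average(local_models, axis=0, weights=sizes)

print("global model after 20 rounds:", w_global)

Shifting each client's feature distribution is one simple way to mimic the Non-IID heterogeneity the abstract highlights: the locally trained models drift apart between rounds, which is precisely what degrades training efficacy and slows the aggregated model's convergence.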

