Resource-Efficient and Delay-Aware Federated Learning Design under Edge Heterogeneity. (arXiv:2112.13926v2 [cs.NI] UPDATED)
Jan. 4, 2022, 2:10 a.m. | David Nickel, Frank Po-Chen Lin, Seyyedali Hosseinalipour, Nicolo Michelusi, Christopher G. Brinton
cs.LG updates on arXiv.org arxiv.org
Federated learning (FL) has emerged as a popular methodology for distributing
machine learning across wireless edge devices. In this work, we consider
optimizing the tradeoff between model performance and resource utilization in
FL, under device-server communication delays and device computation
heterogeneity. Our proposed StoFedDelAv algorithm incorporates a local-global
model combiner into the FL synchronization step. We theoretically characterize
the convergence behavior of StoFedDelAv and obtain the optimal combiner
weights, which consider the global model delay and expected local gradient
error …
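The abstract describes a local-global model combiner applied at the FL synchronization step. A minimal sketch of that idea, assuming a simple convex combination of a stale global model with each device's local update (the toy least-squares task, function names, and the fixed combiner weight `gamma` are illustrative assumptions, not the paper's exact StoFedDelAv construction, which derives the weights from the global-model delay and expected local gradient error):

```python
import numpy as np

def local_sgd(w, X, y, lr=0.1, steps=5):
    """A few SGD steps on a least-squares objective at one device."""
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def combiner_round(w_global_stale, device_data, gamma=0.3):
    """One synchronization round: each device blends the (possibly stale)
    global model with its local update via the combiner weight gamma,
    then the server averages the blended models."""
    updates = []
    for X, y in device_data:
        w_local = local_sgd(w_global_stale.copy(), X, y)
        # local-global combiner: convex mix of stale global and local models
        updates.append(gamma * w_global_stale + (1.0 - gamma) * w_local)
    return np.mean(updates, axis=0)

# Toy heterogeneous setup: four devices with noisy linear-regression data.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
devices = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    y = X @ w_true + 0.01 * rng.normal(size=50)
    devices.append((X, y))

w = np.zeros(2)
for _ in range(30):
    w = combiner_round(w, devices, gamma=0.3)
```

A larger `gamma` keeps the model closer to the stale global copy, which dampens the effect of noisy local gradients at the cost of slower adaptation; the paper's contribution is choosing this tradeoff optimally from the convergence analysis.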