March 28, 2024, 4:41 a.m. | Natalie Lang, Alejandro Cohen, Nir Shlezinger


arXiv:2403.18375v1 Announce Type: new
Abstract: Synchronous federated learning (FL) is a popular paradigm for collaborative edge learning. It typically involves a set of heterogeneous devices locally training neural network (NN) models in parallel with periodic centralized aggregations. As some of the devices may have limited computational resources and varying availability, FL latency is highly sensitive to stragglers. Conventional approaches discard incomplete intra-model updates done by stragglers, alter the amount of local workload or the architecture, or resort to asynchronous settings; which …
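To make the synchronous setting concrete, the following is a minimal sketch of one FL round under the conventional straggler-dropping scheme the abstract describes: clients train in parallel, any client that fails to finish in time is discarded, and the server averages the completed updates. The function name, the straggler model, and the toy "local training" noise are all illustrative assumptions, not details from the paper.

```python
import random

def synchronous_round(global_w, num_clients=10, straggler_prob=0.3, seed=0):
    """One synchronous FL round with straggler dropping (illustrative sketch).

    Clients whose updates are incomplete (stragglers) are simply discarded;
    the server aggregates only the finished updates via a coordinate-wise
    mean, FedAvg-style. All parameters here are hypothetical.
    """
    rng = random.Random(seed)
    updates = []
    for _ in range(num_clients):
        if rng.random() < straggler_prob:
            continue  # straggler: its partial update is dropped
        # toy stand-in for local training: global weights plus small client noise
        updates.append([w + rng.gauss(0, 0.01) for w in global_w])
    if not updates:
        return list(global_w)  # no client finished this round; keep the old model
    # periodic centralized aggregation: average the surviving updates
    return [sum(ws) / len(updates) for ws in zip(*updates)]

new_w = synchronous_round([0.5, -0.2])
```

Note that the round's latency is set by the slowest non-dropped client, which is why synchronous FL is so sensitive to stragglers: dropping them trades wasted computation for speed.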

