FedSPU: Personalized Federated Learning for Resource-constrained Devices with Stochastic Parameter Update
March 19, 2024, 4:42 a.m. | Ziru Niu, Hai Dong, A. K. Qin
cs.LG updates on arXiv.org
Abstract: Personalized Federated Learning (PFL) is widely employed in IoT applications to handle high-volume, non-IID client data while ensuring data privacy. However, the heterogeneous edge devices owned by clients may impose varying degrees of resource constraints, causing computation and communication bottlenecks for PFL. Federated Dropout has emerged as a popular strategy to address this challenge, wherein only a subset of the global model, i.e. a *sub-model*, is trained on a client's device, thereby reducing computation and communication …
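The Federated Dropout baseline the abstract describes can be sketched in a few lines: each client receives only a slice of the global weights, trains that sub-model locally, and the server writes the update back into the full model. The code below is an illustrative sketch of that idea, not the paper's implementation; the layer shapes, `keep_ratio`, and helper names are hypothetical.

```python
# Illustrative sketch of Federated Dropout: a client trains only a
# sub-model (a neuron subset of the global weights). Not FedSPU itself.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical global model: a single weight matrix, input_dim=4, hidden=8.
global_W = rng.normal(size=(4, 8))

def extract_submodel(W, keep_ratio, rng):
    """Sample a subset of hidden units; the client trains only these columns."""
    n_keep = max(1, int(W.shape[1] * keep_ratio))
    idx = np.sort(rng.choice(W.shape[1], size=n_keep, replace=False))
    return W[:, idx].copy(), idx

def merge_update(W, sub_W, idx):
    """Write the client's updated sub-model back into the global model."""
    W = W.copy()
    W[:, idx] = sub_W
    return W

# A resource-constrained client keeps only 50% of the hidden units,
# so its local compute and upload cost roughly halve.
sub_W, idx = extract_submodel(global_W, keep_ratio=0.5, rng=rng)
sub_W += 0.01  # stand-in for a local training step on client data
new_global = merge_update(global_W, sub_W, idx)
```

The communication saving comes from transferring only `sub_W` (here a 4x4 slice instead of the full 4x8 matrix); the untrained columns of the global model are left untouched by this client's round.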