KoReA-SFL: Knowledge Replay-based Split Federated Learning Against Catastrophic Forgetting
April 22, 2024, 4:42 a.m. | Zeke Xia, Ming Hu, Dengke Yan, Ruixuan Liu, Anran Li, Xiaofei Xie, Mingsong Chen
cs.LG updates on arXiv.org (arxiv.org)
Abstract: Although Split Federated Learning (SFL) effectively enables knowledge sharing among resource-constrained clients, it suffers from low training accuracy because it neglects data heterogeneity and catastrophic forgetting. To address this, we propose a novel SFL approach named KoReA-SFL, which adopts a multi-model aggregation mechanism to alleviate the gradient divergence caused by heterogeneous data and a knowledge replay strategy to deal with catastrophic forgetting. Specifically, in KoReA-SFL cloud servers (i.e., …
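The split forward/backward flow and the replay idea mentioned in the abstract can be illustrated with a minimal single-client toy, assuming a scalar linear model on each side of the cut. The class names (`SplitClient`, `SplitServer`), learning rates, and buffer size below are illustrative assumptions, not the paper's algorithm; in particular, KoReA-SFL's multi-model aggregation across heterogeneous clients is not shown.

```python
import random

random.seed(0)  # reproducible replay sampling

class SplitClient:
    """Client-side model half: a single scalar weight (act = w * x)."""
    def __init__(self, w=0.5, lr=0.1):
        self.w, self.lr = w, lr

    def forward(self, x):
        return self.w * x  # "smashed" activation sent to the server

    def backward(self, x, grad_act):
        # Chain rule: d(loss)/dw = d(loss)/d(act) * d(act)/dw = grad_act * x
        self.w -= self.lr * grad_act * x

class SplitServer:
    """Server-side half (scalar weight v) with MSE loss and a replay buffer
    of past (activation, target) pairs to counter catastrophic forgetting."""
    def __init__(self, v=0.5, lr=0.1, buffer_size=8):
        self.v, self.lr = v, lr
        self.buffer, self.buffer_size = [], buffer_size

    def step(self, act, target):
        grad_pred = 2 * (self.v * act - target)  # d(MSE)/d(pred)
        grad_act = grad_pred * self.v            # gradient sent back to the client
        self.v -= self.lr * grad_pred * act      # update the server-side weight
        self.buffer.append((act, target))
        self.buffer = self.buffer[-self.buffer_size:]  # keep only recent samples
        return grad_act

    def replay(self, k=2):
        # Revisit a few stored activations so updates on new data
        # do not erase knowledge learned from earlier data.
        for act, target in random.sample(self.buffer, min(k, len(self.buffer))):
            grad_pred = 2 * (self.v * act - target)
            self.v -= self.lr * grad_pred * act

# Train the split model on the toy relation y = 2 * x.
client, server = SplitClient(), SplitServer()
for _ in range(100):
    act = client.forward(1.0)
    grad_act = server.step(act, target=2.0)
    client.backward(1.0, grad_act)
    server.replay()

print(client.w * server.v)  # combined model approximates the target 2.0
```

Note how the client only ever exchanges cut-layer activations and gradients, which is what lets resource-constrained devices hold just a fragment of the model, while the server's replay buffer revisits earlier activations to mitigate forgetting.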