Feb. 26, 2024, 5:43 a.m. | Pengchao Han, Chao Huang, Geng Tian, Ming Tang, Xin Liu

cs.LG updates on arXiv.org arxiv.org

arXiv:2402.15166v1 Announce Type: cross
Abstract: Split federated learning (SFL) is a recent distributed approach for collaborative model training among multiple clients. In SFL, a global model is typically split into two parts, where clients train one part in a parallel federated manner, and a main server trains the other. Despite the recent research on SFL algorithm development, the convergence analysis of SFL is missing in the literature, and this paper aims to fill this gap. The analysis of SFL can …
