Breaking the Memory Wall for Heterogeneous Federated Learning with Progressive Training
April 23, 2024, 4:42 a.m. | Yebo Wu, Li Li, Chunlin Tian, Chengzhong Xu
cs.LG updates on arXiv.org
Abstract: This paper presents ProFL, a novel progressive federated learning (FL) framework that effectively breaks the memory wall. Specifically, ProFL divides the model into blocks based on its original architecture. Instead of updating the full model in each training round, ProFL first trains the front blocks and safely freezes them after convergence. Training of the next block is then triggered. This process iterates until training of the whole model is completed. In this way, the memory …
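The abstract only sketches the mechanism, so the PyTorch snippet below is a minimal illustration of block-wise progressive training with freezing, not the authors' ProFL implementation. The block split, the auxiliary heads that supervise the front blocks, and the loss-plateau convergence test are all assumptions made for the sketch.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)

# Hypothetical block split of a small MLP along its original architecture.
blocks = nn.ModuleList([
    nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU()),
    nn.Sequential(nn.Linear(256, 128), nn.ReLU()),
    nn.Sequential(nn.Linear(128, 10)),
])
# Assumed auxiliary heads that give the front blocks a training signal;
# the abstract does not specify how intermediate blocks are supervised.
heads = nn.ModuleList([nn.Linear(256, 10), nn.Linear(128, 10), nn.Identity()])
criterion = nn.CrossEntropyLoss()

def forward_block(i, x):
    # The frozen prefix runs under no_grad, so its activations are not kept
    # for backprop -- this is where the peak-memory saving comes from.
    with torch.no_grad():
        for b in blocks[:i]:
            x = b(x)
    return heads[i](blocks[i](x))

def has_converged(losses, window=3, tol=1e-3):
    # Assumed plateau test: the mean loss over the last `window` rounds
    # stopped improving by more than `tol` versus the previous window.
    if len(losses) < 2 * window:
        return False
    prev = sum(losses[-2 * window:-window]) / window
    curr = sum(losses[-window:]) / window
    return prev - curr < tol

def progressive_train(loader, max_rounds=50, lr=0.05):
    # Train front blocks first; freeze each block once its loss plateaus,
    # then move on to the next block until the whole model is trained.
    for i in range(len(blocks)):
        params = list(blocks[i].parameters()) + list(heads[i].parameters())
        opt = torch.optim.SGD(params, lr=lr)
        losses = []
        for _ in range(max_rounds):
            total, n = 0.0, 0
            for x, y in loader:
                opt.zero_grad()
                loss = criterion(forward_block(i, x), y)
                loss.backward()
                opt.step()
                total, n = total + loss.item(), n + 1
            losses.append(total / n)
            if has_converged(losses):
                break
        # Safely freeze the converged block before triggering the next one.
        for p in blocks[i].parameters():
            p.requires_grad_(False)
        print(f"block {i} frozen after {len(losses)} rounds, loss {losses[-1]:.4f}")

# Toy usage: random "images" and labels stand in for a client's local data.
data = TensorDataset(torch.randn(256, 1, 28, 28), torch.randint(0, 10, (256,)))
progressive_train(DataLoader(data, batch_size=32))
```

Because the frozen prefix runs without autograd, only the active block's activations, gradients, and optimizer state occupy memory at any time, which is the memory-wall saving the abstract describes.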