FedMef: Towards Memory-efficient Federated Dynamic Pruning
March 25, 2024, 4:41 a.m. | Hong Huang, Weiming Zhuang, Chen Chen, Lingjuan Lyu
cs.LG updates on arXiv.org arxiv.org
Abstract: Federated learning (FL) promotes decentralized training while prioritizing data confidentiality. However, its application on resource-constrained devices is challenging due to the high demand for computation and memory resources to train deep learning models. Neural network pruning techniques, such as dynamic pruning, could enhance model efficiency, but directly adopting them in FL still poses substantial challenges, including post-pruning performance degradation, high activation memory usage, etc. To address these challenges, we propose FedMef, a novel and memory-efficient …
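The abstract contrasts standard dynamic pruning with the extra constraints of federated training. As context, dynamic magnitude pruning inside a FedAvg-style round can be sketched as follows; this is an illustrative NumPy toy (function names and the round structure are assumptions), not the FedMef method itself:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights (one pruning step)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

def federated_round(client_weights, sparsity):
    """Average pruned client updates (FedAvg-style), then re-prune the global model.

    Re-pruning after aggregation mimics "dynamic" pruning, where the sparse
    pattern is revised each round rather than fixed once up front.
    """
    pruned = [magnitude_prune(w, sparsity) for w in client_weights]
    global_w = np.mean(pruned, axis=0)
    return magnitude_prune(global_w, sparsity)

rng = np.random.default_rng(0)
clients = [rng.normal(size=(4, 4)) for _ in range(3)]
global_model = federated_round(clients, sparsity=0.5)
```

Naive per-round re-pruning like this is exactly where the post-pruning accuracy drop mentioned above arises: weights zeroed in one round may have been important, which is one of the gaps FedMef targets.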