Bayesian Federated Model Compression for Communication and Computation Efficiency
April 12, 2024, 4:41 a.m. | Chengyu Xia, Danny H. K. Tsang, Vincent K. N. Lau
cs.LG updates on arXiv.org
Abstract: In this paper, we investigate Bayesian model compression in federated learning (FL) to construct sparse models that achieve both communication and computation efficiency. We propose a decentralized Turbo variational Bayesian inference (D-Turbo-VBI) FL framework, in which we first propose a hierarchical sparse prior to promote a clustered sparse structure in the weight matrix. Then, by carefully integrating message passing and VBI within a decentralized turbo framework, we propose the D-Turbo-VBI algorithm, which can (i) reduce …
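The abstract does not spell out the hierarchical sparse prior, but the idea of promoting a clustered sparse structure in the weight matrix can be sketched. Below is a minimal illustrative sketch, assuming a two-level spike-and-slab-style prior in which one support variable per row (cluster) gates every weight in that row; the names `rho` and `slab_var` and the row-level grouping are assumptions for illustration, not the paper's exact prior.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_clustered_sparse_weights(n_rows, n_cols, rho=0.3, slab_var=1.0):
    """Draw a weight matrix under a hypothetical two-level sparse prior.

    Level 1: each row (cluster) is active with probability rho.
    Level 2: weights in an active row are Gaussian ("slab");
             weights in an inactive row are exactly zero ("spike").
    """
    row_active = rng.random(n_rows) < rho                 # cluster-level support
    W = rng.normal(0.0, np.sqrt(slab_var), (n_rows, n_cols))
    W[~row_active, :] = 0.0                               # zero out whole inactive rows
    return W, row_active

W, support = sample_clustered_sparse_weights(8, 5)
print(f"active rows: {support.nonzero()[0]}")
print(W.round(2))
```

Zeroing whole rows rather than scattered entries is what makes such structure useful for both goals in the abstract: an all-zero row can be skipped when communicating updates, and the corresponding neuron can be pruned at inference time.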