Communication-Efficient Federated Learning with Adaptive Compression under Dynamic Bandwidth
May 7, 2024, 4:42 a.m. | Ying Zhuansun, Dandan Li, Xiaohong Huang, Caijun Sun
cs.LG updates on arXiv.org arxiv.org
Abstract: Federated learning can train models without directly providing local data to the server. However, frequent local-model updates incur large communication overhead. Recently, researchers have improved the communication efficiency of federated learning mainly through model compression, but two problems are overlooked: 1) the network state of each client changes dynamically; 2) network states differ across clients. Clients with poor bandwidth update their local models slowly, which leads …
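The compression idea the abstract refers to can be illustrated with a minimal sketch. This is not the paper's method: it shows one common approach (top-k sparsification of client updates) combined with a purely illustrative rule that adapts the compression ratio to each client's current bandwidth, so faster clients send denser updates. All function names, bandwidth values, and the `adaptive_ratio` heuristic are assumptions for demonstration.

```python
import numpy as np

def topk_compress(update, ratio):
    """Keep only the largest-magnitude fraction `ratio` of entries.

    Returns (indices, values) -- the sparse payload a client would send.
    """
    flat = update.ravel()
    k = max(1, int(ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

def decompress(idx, vals, shape):
    """Rebuild a dense update from the sparse payload on the server side."""
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = vals
    return flat.reshape(shape)

def adaptive_ratio(bandwidth_mbps, low=0.01, high=0.5, cap_mbps=100.0):
    """Illustrative (hypothetical) rule: send a larger fraction of the
    update when the client's current bandwidth is higher."""
    frac = min(bandwidth_mbps / cap_mbps, 1.0)
    return low + frac * (high - low)

# One simulated round: three clients with different bandwidths.
rng = np.random.default_rng(0)
global_shape = (4, 5)
payloads = []
for bw in (5.0, 40.0, 95.0):  # Mbps, varies per client and per round
    local_update = rng.normal(size=global_shape)
    r = adaptive_ratio(bw)
    payloads.append(topk_compress(local_update, r))

# Server averages the decompressed (sparse) updates.
agg = np.mean([decompress(i, v, global_shape) for i, v in payloads], axis=0)
print(agg.shape)  # (4, 5)
```

The point of adapting the ratio per client is that a fixed compression level either wastes the bandwidth of well-connected clients or stalls poorly connected ones, which is exactly the heterogeneity the abstract highlights.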