TAMUNA: Doubly Accelerated Distributed Optimization with Local Training, Compression, and Partial Participation
April 30, 2024, 4:44 a.m. | Laurent Condat, Ivan Agarský, Grigory Malinovsky, Peter Richtárik
cs.LG updates on arXiv.org
Abstract: In distributed optimization and learning, several machines alternate between local computations in parallel and communication with a distant server. Communication is usually slow and costly and forms the main bottleneck. This is particularly true in federated learning, where a large number of users collaborate toward a global training task. In addition, it is desirable for a robust algorithm to allow for partial participation, since it is often the case that some clients are not able …
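The setting the abstract describes (clients running local gradient steps between rounds of compressed communication with a server, with only a subset of clients active in any given round) can be made concrete with a small sketch. The Python toy below is not TAMUNA's actual update rule, which the abstract does not spell out; it is a generic federated round combining the three named ingredients, with all function names and parameters chosen for illustration only.

```python
import numpy as np

# Hypothetical sketch of one generic federated round with local training,
# update compression, and partial participation. NOT the TAMUNA algorithm.

rng = np.random.default_rng(0)

def top_k(v, k):
    """Keep the k largest-magnitude entries of v, zero the rest
    (a standard sparsifying compressor)."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def local_training(x, grad_fn, steps, lr):
    """Run several local gradient steps before communicating."""
    for _ in range(steps):
        x = x - lr * grad_fn(x)
    return x

def federated_round(x, client_grads, participation=0.5,
                    local_steps=5, lr=0.1, k=2):
    n = len(client_grads)
    # Partial participation: a random subset of clients is active.
    active = rng.random(n) < participation
    updates = []
    for i, grad_fn in enumerate(client_grads):
        if not active[i]:
            continue
        x_i = local_training(x.copy(), grad_fn, local_steps, lr)
        # Send a compressed model update instead of the full vector.
        updates.append(top_k(x_i - x, k))
    if updates:
        x = x + np.mean(updates, axis=0)
    return x

# Toy quadratics f_i(x) = ||x - b_i||^2 / 2 for 10 clients.
targets = [rng.normal(size=5) for _ in range(10)]
grads = [lambda x, b=b: x - b for b in targets]

x = np.zeros(5)
for _ in range(100):
    x = federated_round(x, grads)
print(x)  # roughly approaches the average of the client targets
```

In this sketch, local training reduces how often clients must communicate, top-k compression reduces how much they send per round, and the participation mask models clients that drop out, which is the robustness property the abstract highlights.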