SparDL: Distributed Deep Learning Training with Efficient Sparse Communication
Feb. 26, 2024, 5:44 a.m. | Minjun Zhao, Yichen Yin, Yuren Mao, Qing Liu, Lu Chen, Yunjun Gao
cs.LG updates on arXiv.org
Abstract: Top-k sparsification has recently been widely used to reduce the communication volume in distributed deep learning. However, due to the Sparse Gradient Accumulation (SGA) dilemma, the performance of top-k sparsification remains limited. A few methods have been put forward to handle the SGA dilemma, but even the state-of-the-art among them suffers from several drawbacks, e.g., it relies on an inefficient communication algorithm and requires extra transmission steps. Motivated by the limitations of existing methods, …
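The truncated abstract does not describe SparDL itself, but the baseline it builds on is standard top-k gradient sparsification with local residual (error-feedback) accumulation. As a point of reference, a minimal sketch of that baseline might look like the following; all names here are illustrative and are not taken from the paper:

```python
import torch

def topk_sparsify(grad: torch.Tensor, residual: torch.Tensor, k: int):
    """Top-k sparsification with local residual accumulation (a sketch).

    Each worker folds its gradient into a locally kept residual, transmits
    only the k largest-magnitude entries, and keeps everything else as the
    new residual. Accumulating unsent entries locally is the mechanism
    behind the Sparse Gradient Accumulation (SGA) dilemma the abstract
    mentions: the sparse index sets chosen on different workers do not
    align, which complicates efficient aggregation.
    """
    acc = residual + grad                   # fold leftover error into this step
    flat = acc.flatten()
    _, idx = torch.topk(flat.abs(), k)      # k largest-magnitude entries
    values = flat[idx]                      # sparse payload to transmit
    new_residual = acc.clone()
    new_residual.view(-1)[idx] = 0.0        # unsent entries stay local
    return idx, values, new_residual

# Toy usage for one worker and one step; in a real run, (idx, values)
# would be aggregated across workers by the communication algorithm.
grad = torch.randn(1000)
residual = torch.zeros_like(grad)
idx, values, residual = topk_sparsify(grad, residual, k=10)
```

The design tension is visible even in this sketch: error feedback preserves convergence, but because each worker selects a different index set, aggregating the sparse updates efficiently is nontrivial, which is the problem SparDL targets.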