Fundamental Limits of Communication Efficiency for Model Aggregation in Distributed Learning: A Rate-Distortion Approach. (arXiv:2206.13984v1 [cs.IT])
stat.ML updates on arXiv.org
One of the main focuses in distributed learning is communication efficiency,
since model aggregation at each round of training can involve transmitting
millions to billions of parameters. Several model compression methods, such as gradient
quantization and sparsification, have been proposed to improve the
communication efficiency of model aggregation. However, the
information-theoretic minimum communication cost for a given distortion of
gradient estimators is still unknown. In this paper, we study the fundamental
limit on the communication cost of model aggregation in distributed …
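
For context, the "information-theoretic minimum communication cost for a given distortion" is formalized by the classical Shannon rate-distortion function; a minimal statement, writing g for a gradient, \hat{g} for its compressed estimate, and d for the distortion measure (this notation is assumed here, not taken from the paper):

    R(D) = \min_{p(\hat{g} \mid g) \,:\, \mathbb{E}[d(g, \hat{g})] \le D} I(g; \hat{g})

Practical schemes such as gradient quantization operate above this limit. Below is a minimal sketch of unbiased stochastic quantization in the QSGD style, assuming a NumPy gradient vector; it illustrates the class of compression methods the abstract cites, not the paper's own construction:

    import numpy as np

    def stochastic_quantize(grad: np.ndarray, levels: int = 4) -> np.ndarray:
        # Unbiased QSGD-style quantizer: E[output] == grad.
        norm = np.linalg.norm(grad)
        if norm == 0.0:
            return grad.copy()
        scaled = np.abs(grad) / norm * levels   # magnitudes mapped into [0, levels]
        lower = np.floor(scaled)
        prob_up = scaled - lower                # round up with this probability
        quantized = lower + (np.random.rand(*grad.shape) < prob_up)
        return np.sign(grad) * quantized * norm / levels

    g = np.random.randn(1_000_000).astype(np.float32)
    g_hat = stochastic_quantize(g, levels=4)
    print(float(np.linalg.norm(g - g_hat) / np.linalg.norm(g)))  # relative distortion

Fewer quantization levels mean fewer bits per coordinate at the cost of higher distortion, which is exactly the rate-distortion trade-off the paper analyzes.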