May 15, 2023, 12:43 a.m. | Zheng Chen, Martin Dahl, Erik G. Larsson

cs.LG updates on arXiv.org

In this work, we focus on the communication aspect of decentralized learning,
in which multiple agents train a shared machine learning model using
decentralized stochastic gradient descent (D-SGD) over distributed data. In
particular, we investigate the impact of broadcast transmission and a
probabilistic random-access policy on the convergence performance of D-SGD,
accounting for the broadcast nature of wireless channels and the link dynamics
of the communication topology. Our results demonstrate that optimizing the
access probability to maximize the expected number of …
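To make the setting concrete, here is a minimal sketch (not the authors' implementation) of D-SGD with a probabilistic random-access policy: in each round, every agent broadcasts its model with some access probability, each agent averages the models it heard (a consensus/mixing step), and then takes a local gradient step. The topology, quadratic losses, and names such as `access_prob` and `lr` are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_agents, dim = 10, 5
access_prob = 0.6  # per-round broadcast probability (assumed value)
lr = 0.1           # local SGD step size (assumed value)
# Assumed fully connected wireless topology (no self-links).
adjacency = np.ones((n_agents, n_agents)) - np.eye(n_agents)

# Toy local objectives: agent i minimizes 0.5 * ||x - targets[i]||^2.
targets = rng.normal(size=(n_agents, dim))
models = rng.normal(size=(n_agents, dim))

def local_gradient(i, x):
    return x - targets[i]

for step in range(200):
    # Probabilistic random access: which agents broadcast this round.
    transmits = rng.random(n_agents) < access_prob
    new_models = models.copy()
    for i in range(n_agents):
        # Models heard by agent i: its own plus those of transmitting neighbors.
        heard = [models[i]] + [models[j] for j in range(n_agents)
                               if transmits[j] and adjacency[i, j]]
        # Consensus (mixing) step: uniform average of received models.
        mixed = np.mean(heard, axis=0)
        # Local SGD step on agent i's own loss.
        new_models[i] = mixed - lr * local_gradient(i, mixed)
    models = new_models

# Disagreement across agents shrinks as the models reach consensus.
print("disagreement:", np.linalg.norm(models - models.mean(axis=0)))
```

The access probability trades off connectivity against channel contention, which is the tension the paper's truncated result statement refers to: a higher `access_prob` activates more links per round, but in a real random-access channel it also raises collision risk, so the optimal value is generally interior.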
