Oct. 14, 2022, 1:12 a.m. | Diyuan Wu, Vyacheslav Kungurtsev, Marco Mondelli

cs.LG updates on arXiv.org

The stochastic heavy ball method (SHB), also known as stochastic gradient
descent (SGD) with Polyak's momentum, is widely used in training neural
networks. However, despite the remarkable success of this algorithm in
practice, its theoretical characterization remains limited. In this paper, we
focus on neural networks with two and three layers and provide a rigorous
understanding of the properties of the solutions found by SHB: (i)
stability after dropping out part of the neurons, (ii) connectivity
along a low-loss path, …
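To make the update rule concrete, here is a minimal sketch of SHB (SGD with Polyak's momentum) on a toy least-squares problem. The step size, momentum value, batch size, and data below are illustrative assumptions for the sketch, not values taken from the paper.

```python
import numpy as np

# Toy least-squares problem: recover x_true from noisy-free linear measurements.
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 10))
x_true = rng.normal(size=10)
y = A @ x_true

eta, beta = 0.01, 0.9      # step size and momentum (illustrative choices)
x = np.zeros(10)
x_prev = x.copy()

for t in range(1000):
    i = rng.integers(0, 100, size=16)            # mini-batch indices
    grad = A[i].T @ (A[i] @ x - y[i]) / len(i)   # stochastic gradient
    # Heavy ball update: x_{t+1} = x_t - eta * g_t + beta * (x_t - x_{t-1})
    x_next = x - eta * grad + beta * (x - x_prev)
    x_prev, x = x, x_next

print("final loss:", 0.5 * np.mean((A @ x - y) ** 2))
```

An equivalent formulation keeps an explicit velocity term, v_{t+1} = beta * v_t - eta * g_t followed by x_{t+1} = x_t + v_{t+1}; both forms describe the same Polyak momentum dynamics.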

