Feb. 26, 2024, 5:43 a.m. | Kento Imaizumi, Hideaki Iiduka

cs.LG updates on arXiv.org arxiv.org

arXiv:2402.15344v1 Announce Type: cross
Abstract: The performance of stochastic gradient descent (SGD), the simplest first-order optimizer for training deep neural networks, depends not only on the learning rate but also on the batch size. Both affect the number of iterations and the stochastic first-order oracle (SFO) complexity needed for training. In particular, previous numerical results indicated that, for SGD using a constant learning rate, the number of iterations needed for training decreases when the batch size increases, …
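The trade-off the abstract describes can be made concrete: with a constant learning rate, larger batches typically reduce the number of iterations, while SFO complexity counts the total per-sample gradient evaluations, i.e. iterations times batch size. The sketch below (not from the paper; the toy least-squares problem, learning rate, and tolerance are illustrative assumptions) shows how one might measure both quantities for mini-batch SGD at several batch sizes.

```python
# Minimal sketch (illustrative, not the paper's experiment): mini-batch SGD
# with a constant learning rate on a toy least-squares problem, recording
# (a) iterations to reach a target loss and (b) SFO complexity, taken here
# as iterations x batch size (total per-sample gradient evaluations).
import numpy as np

rng = np.random.default_rng(0)
n, d = 4096, 32
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.01 * rng.standard_normal(n)

def iterations_to_tolerance(batch_size, lr=0.05, tol=1e-3, max_iters=100_000):
    """Run constant-step mini-batch SGD; return iterations needed to bring
    the full-data mean-squared error below `tol`."""
    w = np.zeros(d)
    for t in range(1, max_iters + 1):
        idx = rng.choice(n, size=batch_size, replace=False)
        grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch_size  # mini-batch gradient
        w -= lr * grad
        if np.mean((X @ w - y) ** 2) < tol:
            return t
    return max_iters

for b in (16, 64, 256, 1024):
    iters = iterations_to_tolerance(b)
    print(f"batch size {b:5d}: iterations {iters:6d}, SFO complexity {iters * b}")
```

Under this accounting, a batch size that halves the iteration count but doubles the per-iteration oracle calls leaves SFO complexity unchanged, which is why the paper studies the two quantities jointly.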

