Stochastic Normalized Gradient Descent with Momentum for Large-Batch Training
April 16, 2024, 4:44 a.m. | Shen-Yi Zhao, Chang-Wei Shi, Yin-Peng Xie, Wu-Jun Li
cs.LG updates on arXiv.org
Abstract: Stochastic gradient descent (SGD) and its variants have been the dominant optimization methods in machine learning. Compared to SGD with small-batch training, SGD with large-batch training can better utilize the computational power of current multi-core systems such as graphics processing units (GPUs) and can reduce the number of communication rounds in distributed training settings. Thus, SGD with large-batch training has attracted considerable attention. However, existing empirical results have shown that large-batch training typically leads to a drop in …
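The abstract is cut off before the method itself is described. Based on the title alone, here is a minimal sketch of what a normalized-gradient update with momentum typically looks like; the paper's exact SNGM formulation may differ, and the function name `sngm_step` and the hyperparameters `lr`, `beta`, and `eps` below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sngm_step(w, grad, m, lr=0.1, beta=0.9, eps=1e-8):
    # Accumulate a momentum of stochastic gradients, then step along the
    # *normalized* momentum direction, so each update has size at most ~lr.
    # Bounding the step size this way is the usual motivation for
    # normalized methods in large-batch training.
    m = beta * m + grad
    w = w - lr * m / (np.linalg.norm(m) + eps)
    return w, m

# Toy usage: minimize f(w) = 0.5 * ||w||^2 with noisy gradients standing
# in for large mini-batch gradient estimates.
rng = np.random.default_rng(0)
w = rng.normal(size=5)
m = np.zeros_like(w)
for _ in range(200):
    grad = w + 0.01 * rng.normal(size=5)  # stochastic gradient of f
    w, m = sngm_step(w, grad, m, lr=0.05)
print(np.linalg.norm(w))  # converges to a small neighborhood of 0
```

Because the step length is bounded by the learning rate regardless of the gradient's magnitude, such updates tend to be less sensitive to the gradient-scale blow-ups that large batch sizes can produce.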