On the Convergence of mSGD and AdaGrad for Stochastic Optimization. (arXiv:2201.11204v1 [cs.LG])
Web: http://arxiv.org/abs/2201.11204
Jan. 28, 2022, 2:10 a.m. | Ruinan Jin, Yu Xing, Xingkang He
cs.LG updates on arXiv.org
As one of the most fundamental stochastic optimization algorithms, stochastic
gradient descent (SGD) has been intensively developed and extensively applied
in machine learning over the past decade. Several modified SGD-type algorithms
have been proposed that outperform SGD in many competitions and applications
in terms of convergence rate and accuracy, such as momentum-based SGD (mSGD)
and the adaptive gradient algorithm (AdaGrad). Despite these empirical
successes, the theoretical properties of these algorithms have not been well
established due to technical difficulties. With …
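For context, the two algorithms named in the abstract follow standard update rules; a minimal sketch in Python (illustrative variable names and step sizes, not the paper's notation or implementation) is:

import numpy as np

def msgd_step(w, grad, velocity, lr=0.01, beta=0.9):
    """Momentum-based SGD: smooth the gradient with an exponentially weighted velocity."""
    velocity = beta * velocity + grad           # momentum accumulation
    w = w - lr * velocity                       # parameter update along the smoothed direction
    return w, velocity

def adagrad_step(w, grad, accum, lr=0.01, eps=1e-8):
    """AdaGrad: per-coordinate step sizes shrink with the accumulated squared gradients."""
    accum = accum + grad ** 2                   # running sum of squared gradients
    w = w - lr * grad / (np.sqrt(accum) + eps)  # coordinate-wise adaptive step
    return w, accum

The key difference is that mSGD smooths the update direction via the velocity term, while AdaGrad rescales each coordinate by its accumulated squared gradients, so its effective step size decays over time.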