Retrospective Approximation for Smooth Stochastic Optimization. (arXiv:2103.04392v2 [math.OC] UPDATED)
June 3, 2022, 1:11 a.m. | David Newton, Raghu Bollapragada, Raghu Pasupathy, Nung Kwan Yip
stat.ML updates on arXiv.org
Stochastic Gradient (SG) is the de facto iterative technique for solving stochastic optimization (SO) problems with a smooth (non-convex) objective $f$ and a stochastic first-order oracle. SG's attractiveness is due in part to the simplicity of executing a single step along the negative subsampled gradient direction to update the incumbent iterate. In this paper, we question SG's choice of executing a single step as opposed to multiple steps between subsample updates. Our investigation leads naturally to generalizing SG into Retrospective Approximation …
Tags: approximation, arxiv, math, optimization, retrospective, stochastic
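A minimal sketch of the contrast the abstract draws, in Python with NumPy on a toy quadratic objective: plain SG takes one step per fresh subsample, while a retrospective-approximation-style loop takes several steps on a fixed subsample before drawing a new (here, larger) one. The objective, step size, inner-step count, and doubling sample-size schedule are illustrative assumptions, not the paper's actual algorithm or schedule.

import numpy as np

rng = np.random.default_rng(0)

def stoch_grad(x, batch):
    # Subsampled gradient of f(x) = E[0.5 * ||x - xi||^2], whose optimum
    # is zero; each row of `batch` is one noisy observation xi.
    return x - batch.mean(axis=0)

def sg(x, n_iters=100, batch_size=8, lr=0.1):
    # Plain SG: one step per freshly drawn subsample.
    for _ in range(n_iters):
        batch = rng.normal(size=(batch_size, x.size))
        x = x - lr * stoch_grad(x, batch)
    return x

def ra(x, n_outer=10, inner_steps=10, lr=0.1):
    # RA-style loop: fix a subsample, take several steps on that
    # sample-path approximation, then refresh with a larger sample.
    batch_size = 8
    for _ in range(n_outer):
        batch = rng.normal(size=(batch_size, x.size))
        for _ in range(inner_steps):   # multiple steps per subsample
            x = x - lr * stoch_grad(x, batch)
        batch_size *= 2                # illustrative growth schedule
    return x

x0 = rng.normal(size=5)
print("SG distance to optimum:", np.linalg.norm(sg(x0.copy())))
print("RA distance to optimum:", np.linalg.norm(ra(x0.copy())))

Both variants use the same total oracle budget shape (steps times samples); the RA loop simply trades fresh subsamples for repeated use of one, which is the design question the paper investigates.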
More from arxiv.org / stat.ML updates on arXiv.org
Mixture of partially linear experts (22 hours ago | arxiv.org)
Adaptive deep learning for nonlinear time series models (1 day, 22 hours ago | arxiv.org)
A Full Adagrad algorithm with O(Nd) operations (1 day, 22 hours ago | arxiv.org)
Minimax Regret Learning for Data with Heterogeneous Subgroups (1 day, 22 hours ago | arxiv.org)