Sept. 2, 2022, 1:13 a.m. | Kazusato Oko, Shunta Akiyama, Tomoya Murata, Taiji Suzuki

stat.ML updates on arXiv.org

While variance reduction methods have shown great success in solving large-scale
optimization problems, many of them suffer from accumulated errors and therefore
require periodic full-gradient computations. In this paper, we present a
single-loop algorithm named SLEDGE (Single-Loop mEthoD for Gradient Estimator)
for finite-sum nonconvex optimization, which does not require periodic refreshes
of the gradient estimator yet achieves nearly optimal gradient complexity.
Unlike existing methods, SLEDGE has the advantage of versatility:
(i) second-order optimality, (ii) exponential convergence …
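The abstract is high level, so as a rough illustration of the idea of a single-loop, refresh-free gradient estimator, here is a minimal Python sketch of a STORM-style recursive momentum estimator. This is not the SLEDGE update from the paper; the function names and parameters (`grad_i`, `eta`, `a`) are hypothetical, and it only shows how an estimator can be maintained per step without periodic full-gradient passes.

```python
import numpy as np

def single_loop_vr_sketch(grad_i, x0, n, T=1000, eta=0.05, a=0.1, seed=0):
    """Sketch of a single-loop variance-reduced method (STORM-style).

    grad_i(x, i) returns the gradient of the i-th component f_i at x.
    Unlike double-loop methods (e.g., SVRG), the estimator `d` is never
    reset with a full gradient; it is corrected recursively each step.
    """
    rng = np.random.default_rng(seed)
    x = x0.copy()
    d = grad_i(x, rng.integers(n))          # initial estimator from one sample
    for _ in range(T):
        x_new = x - eta * d                 # descend along the current estimator
        i = rng.integers(n)
        # Recursive correction: reuse the old estimator instead of a full refresh.
        d = grad_i(x_new, i) + (1 - a) * (d - grad_i(x, i))
        x = x_new
    return x
```

The key design point this sketch illustrates is the single loop: the momentum term `(1 - a) * (d - grad_i(x, i))` damps accumulated estimation error at every iteration, which is what removes the need for the periodic full-gradient computation that the abstract contrasts against.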

