Oct. 13, 2022, 1:13 a.m. | Tomoya Murata, Taiji Suzuki

cs.LG updates on arXiv.org

In recent centralized nonconvex distributed learning and federated learning,
local methods are a promising approach to reducing communication time.
However, existing work has mainly focused on studying first-order optimality
guarantees. On the other hand, algorithms with second-order optimality
guarantees, i.e., algorithms that escape saddle points, have been extensively
studied in the non-distributed optimization literature. In this paper, we study
a new local algorithm called Bias-Variance Reduced Local Perturbed SGD
(BVR-L-PSGD), which combines the existing bias-variance reduced gradient
estimator with parameter …
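To make the high-level idea concrete, the sketch below shows a generic local SGD loop with periodic server-side averaging and a small random perturbation injected near stationary points to escape saddles. This is only an illustration of the general "local + perturbed SGD" pattern the abstract refers to, not the paper's BVR-L-PSGD algorithm (in particular, it omits the bias-variance reduced gradient estimator); all function names, hyperparameters, and the toy objectives are assumptions made for the example.

```python
import numpy as np

def local_perturbed_sgd(grad_fns, x0, *, num_rounds=100, local_steps=10,
                        lr=0.1, perturb_radius=1e-2, grad_threshold=1e-3,
                        rng=None):
    """Illustrative local SGD with saddle-escaping perturbation (not BVR-L-PSGD).

    grad_fns : list of callables, one per worker; grad_fns[i](x) returns a
               stochastic gradient of worker i's local objective at x.
    x0       : initial parameter vector shared by all workers.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(num_rounds):
        local_iterates = []
        for grad in grad_fns:
            xi = x.copy()
            for _ in range(local_steps):
                xi -= lr * grad(xi)           # local SGD step on this worker
            local_iterates.append(xi)
        x = np.mean(local_iterates, axis=0)   # communicate once per round: average
        # crude stationarity check: averaged gradient estimate across workers
        g = np.mean([grad(x) for grad in grad_fns], axis=0)
        if np.linalg.norm(g) < grad_threshold:
            # near a stationary point: add a small random perturbation so the
            # iterate can leave a saddle point instead of stalling there
            noise = rng.uniform(-1.0, 1.0, size=x.shape)
            x = x + perturb_radius * noise / max(np.linalg.norm(noise), 1e-12)
    return x

if __name__ == "__main__":
    # toy usage: two workers with slightly different noisy quadratic objectives
    rng = np.random.default_rng(0)
    targets = [np.array([1.0, -2.0]), np.array([1.5, -1.5])]
    grad_fns = [lambda x, t=t: (x - t) + 0.01 * rng.standard_normal(x.shape)
                for t in targets]
    print(local_perturbed_sgd(grad_fns, np.zeros(2), rng=rng))
```

The point of the structure is that workers communicate only once per round (after `local_steps` local updates), which is the communication saving that local methods target; the perturbation step is the standard device used to obtain second-order optimality guarantees.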

