Escaping Saddle Points with Bias-Variance Reduced Local Perturbed SGD for Communication Efficient Nonconvex Distributed Learning. (arXiv:2202.06083v3 [cs.LG] UPDATED)
Oct. 13, 2022, 1:13 a.m. | Tomoya Murata, Taiji Suzuki
cs.LG updates on arXiv.org
In recent centralized nonconvex distributed learning and federated learning,
local methods are a promising approach to reducing communication time.
However, existing work has mainly focused on first-order optimality
guarantees. On the other hand, algorithms with second-order optimality
guarantees, i.e., algorithms that escape saddle points, have been extensively
studied in the non-distributed optimization literature. In this paper, we
study a new local algorithm called Bias-Variance Reduced Local Perturbed SGD
(BVR-L-PSGD), which combines the existing bias-variance reduced gradient
estimator with parameter …
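The abstract's two core ingredients, periodic model averaging across workers (the "local" part) and random perturbation near stationary points (the saddle-escaping part), can be illustrated with a toy sketch. This is a hypothetical simplification, not the paper's BVR-L-PSGD: the bias-variance reduced estimator is replaced by plain noisy gradients, and all constants (learning rate, perturbation radius, thresholds) are made up for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Strict-saddle test function: f(x, y) = x**4/4 - x**2/2 + y**2/2.
# The origin is a saddle point; the minima sit at (+/-1, 0).
def grad(v):
    x, y = v
    return np.array([x**3 - x, y])

def local_perturbed_sgd(x0, n_workers=4, rounds=40, local_steps=5,
                        lr=0.1, noise=1e-2, perturb_radius=1e-2):
    """Hypothetical sketch (NOT the paper's exact BVR-L-PSGD): each worker
    runs noisy local gradient steps from the shared iterate, the server
    averages the models (one communication round), and a small random
    perturbation is injected whenever the full gradient is tiny, i.e. near
    a stationary point, so the iterate can escape strict saddles."""
    x = np.asarray(x0, dtype=float)
    for _ in range(rounds):
        local_models = []
        for _w in range(n_workers):
            xi = x.copy()
            for _ in range(local_steps):
                g = grad(xi) + noise * rng.standard_normal(xi.shape)
                xi -= lr * g                      # local stochastic step
            local_models.append(xi)
        x = np.mean(local_models, axis=0)         # communicate and average
        if np.linalg.norm(grad(x)) < 1e-3:        # near-stationary: perturb
            x += perturb_radius * rng.standard_normal(x.shape)
    return x

# Started exactly at the saddle, plain gradient descent would stall;
# the perturbation pushes the iterate toward one of the minima at x = +/-1.
x_star = local_perturbed_sgd(np.zeros(2))
```

Communicating only every `local_steps` iterations is what makes such methods communication-efficient relative to synchronizing after every gradient step.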