Escaping Saddle Points with Bias-Variance Reduced Local Perturbed SGD for Communication Efficient Nonconvex Distributed Learning. (arXiv:2202.06083v2 [cs.LG] UPDATED)
cs.LG updates on arXiv.org
In recent centralized nonconvex distributed learning and federated learning,
local methods are among the most promising approaches for reducing
communication time. However, existing work has mainly focused on first-order
optimality guarantees. By contrast, algorithms with second-order optimality
guarantees, i.e., algorithms that escape saddle points, have been studied
extensively in the non-distributed optimization literature. In this paper, we
study a new local algorithm called Bias-Variance Reduced Local Perturbed SGD
(BVR-L-PSGD), which combines the existing bias-variance reduced gradient
estimator with parameter …
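The abstract is truncated, so the full algorithm is not specified here. Still, the core idea it names — local SGD rounds where each worker takes several perturbed gradient steps (isotropic noise helps the iterate escape strict saddle points) before a single communication round averages the workers' iterates — can be sketched. This is a minimal illustrative sketch under those assumptions, not the paper's actual BVR-L-PSGD (which additionally uses a bias-variance reduced gradient estimator); all function and parameter names below are hypothetical:

```python
import numpy as np

def local_perturbed_sgd(grad, x0, workers=4, local_steps=20, rounds=20,
                        lr=0.01, perturb_radius=1e-2, seed=0):
    """Illustrative local SGD with isotropic perturbation (not the paper's
    exact method). Each round, every worker runs `local_steps` perturbed
    gradient steps from the shared iterate; the server then averages the
    workers' iterates, costing one communication per round."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(rounds):
        local_iterates = []
        for _ in range(workers):
            y = x.copy()
            for _ in range(local_steps):
                # Gaussian perturbation lets the iterate drift off
                # exact saddle points, where the gradient vanishes.
                noise = perturb_radius * rng.standard_normal(y.shape)
                y -= lr * (grad(y) + noise)
            local_iterates.append(y)
        x = np.mean(local_iterates, axis=0)  # communication: average iterates
    return x

# Toy usage: f(x, y) = x^2 - y^2 has a strict saddle at the origin.
# Plain gradient descent started at 0 never moves; the perturbation
# pushes the iterate into the escape direction (the y-axis).
grad = lambda v: np.array([2.0 * v[0], -2.0 * v[1]])
x_final = local_perturbed_sgd(grad, np.zeros(2))
```

Starting exactly at the saddle, the noise seeds a nonzero component along the negative-curvature direction, which the local steps then amplify, while the positive-curvature coordinate stays contracted near zero.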