Escaping Saddle Points with Bias-Variance Reduced Local Perturbed SGD for Communication Efficient Nonconvex Distributed Learning. (arXiv:2202.06083v3 [cs.LG] UPDATED)
Oct. 13, 2022, 1:13 a.m. | Tomoya Murata, Taiji Suzuki
cs.LG updates on arXiv.org
In recent centralized nonconvex distributed learning and federated learning,
local methods are among the promising approaches for reducing communication
time. However, existing work has mainly focused on first-order optimality
guarantees. On the other hand, algorithms with second-order optimality
guarantees, i.e., algorithms that escape saddle points, have been extensively
studied in the non-distributed optimization literature. In this paper, we
study a new local algorithm called Bias-Variance Reduced Local Perturbed SGD
(BVR-L-PSGD), which combines the existing bias-variance reduced gradient
estimator with parameter …
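Since the abstract is cut off, the paper's exact update rules are not shown here, but the name indicates local (perturbed) SGD steps with a bias-variance reduced gradient estimator between communications. The following is a minimal sketch of one communication round under those assumptions, using a SARAH-style recursive estimator and isotropic Gaussian perturbation; the function name, step sizes, noise scale, and schedule are hypothetical illustrations, not the paper's algorithm.

# Minimal sketch (assumptions, not the paper's exact method): one
# communication round of local perturbed SGD with a SARAH-style
# bias-variance reduced gradient estimator.
import numpy as np

rng = np.random.default_rng(0)

def bvr_l_psgd_round(x, workers, grad, lr=0.05, local_steps=8, noise=1e-3):
    """Each worker runs perturbed local steps from the shared point x;
    the averaged result is the round's single communication."""
    updated = []
    for data in workers:
        w = x.copy()
        v = grad(w, data)          # anchor gradient at the synchronization point
        w_prev = w.copy()
        for _ in range(local_steps):
            w = w + rng.normal(scale=noise, size=w.shape)  # perturbation: escapes strict saddles
            # recursive estimator: v_t = g(w_t) - g(w_{t-1}) + v_{t-1}
            v = grad(w, data) - grad(w_prev, data) + v
            w_prev = w.copy()
            w = w - lr * v
        updated.append(w)
    return np.mean(updated, axis=0)  # parameter averaging is the only communication

# Toy check: f(w) = (w^2 - 1)^2 has a vanishing gradient at the strict
# saddle w = 0, so only the perturbation can move the iterate off it.
toy_grad = lambda w, data: 4.0 * w * (w ** 2 - 1.0)  # 'data' unused in this toy
x = np.zeros(1)                                      # start exactly at the saddle
for _ in range(60):
    x = bvr_l_psgd_round(x, workers=[None], grad=toy_grad)  # single worker for brevity
print(x)  # ends near a minimum at w = +1 or w = -1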
Jobs in AI, ML, Big Data
Senior Machine Learning Engineer
@ GPTZero | Toronto, Canada
Software Engineer III - Full Stack Developer - ModelOps, MLOps
@ JPMorgan Chase & Co. | NY, United States
Senior Lead Software Engineer - Full Stack Senior Developer - ModelOps, MLOps
@ JPMorgan Chase & Co. | NY, United States
Research Scientist (m/f/d) - Numerical Simulation of Laser-Matter Interaction
@ Fraunhofer-Gesellschaft | Freiburg, DE, 79104
Research Scientist, Speech Real-Time Dialog
@ Google | Mountain View, CA, USA