Web: http://arxiv.org/abs/2102.04297

May 12, 2022, 1:10 a.m. | Xingyu Wang, Sewoong Oh, Chang-Han Rhee

stat.ML updates on arXiv.org

The empirical success of deep learning is often attributed to SGD's
mysterious ability to avoid sharp local minima in the loss landscape, as sharp
minima are known to lead to poor generalization. Recently, empirical evidence
of heavy-tailed gradient noise was reported in many deep learning tasks, and it
was shown in Şimşekli (2019a,b) that SGD can escape sharp local minima
in the presence of such heavy-tailed gradient noise, providing a partial
solution to the mystery. In this work, we analyze …
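The escape mechanism the abstract refers to can be illustrated with a toy simulation (not the paper's actual analysis): run SGD on a sharp quadratic well, once with Gaussian gradient noise and once with heavy-tailed noise, and count how often the iterate leaves the basin. The loss, step size, thresholds, and the choice of a Student-t distribution (df < 2, so infinite variance) as the heavy-tailed sampler are all illustrative assumptions.

```python
import numpy as np

def sgd_escape_fraction(noise_sampler, runs=50, steps=500, lr=0.1, seed=0):
    """Fraction of SGD runs that leave the sharp basin |x| < 1.

    Toy loss: f(x) = 2 x^2, a 'sharp' quadratic well at 0 (illustrative choice).
    Update:   x <- x - lr * (f'(x) + noise).
    """
    rng = np.random.default_rng(seed)
    escapes = 0
    for _ in range(runs):
        x = 0.0
        for _ in range(steps):
            grad = 4.0 * x                      # derivative of 2 x^2
            x = x - lr * (grad + noise_sampler(rng))
            if abs(x) > 1.0:                    # left the sharp basin
                escapes += 1
                break
    return escapes / runs

# Light-tailed noise: large jumps are astronomically rare, so the iterate
# stays trapped near the sharp minimum.
gauss = sgd_escape_fraction(lambda rng: rng.normal())
# Heavy-tailed noise (Student-t, df=1.2): occasional huge gradient spikes
# kick the iterate straight out of the narrow basin.
heavy = sgd_escape_fraction(lambda rng: rng.standard_t(df=1.2))
print(f"Gaussian escape fraction:     {gauss:.2f}")
print(f"Heavy-tailed escape fraction: {heavy:.2f}")
```

With these settings the Gaussian runs essentially never escape while the heavy-tailed runs almost always do, which is the qualitative effect the cited work formalizes.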
