Web: http://arxiv.org/abs/2206.10654

June 23, 2022, 1:12 a.m. | Simran Kaur, Jeremy Cohen, Zachary C. Lipton

stat.ML updates on arXiv.org

The mechanisms by which certain training interventions, such as increasing
learning rates and applying batch normalization, improve the generalization of
deep networks remain a mystery. Prior works have speculated that "flatter"
solutions generalize better to unseen data than "sharper" solutions, motivating
several metrics for measuring flatness (particularly $\lambda_{max}$, the
largest eigenvalue of the Hessian of the loss) and algorithms, such as
Sharpness-Aware Minimization (SAM) [1], that directly optimize for flatness.
Other works question the link between $\lambda_{max}$ and generalization. In …
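For readers unfamiliar with how $\lambda_{max}$ is typically measured in practice, here is a minimal sketch (not code from the paper): it estimates the largest-magnitude Hessian eigenvalue of a training loss by power iteration on Hessian-vector products. The PyTorch framing, the toy model and data, and the function names (`lambda_max`, `loss_fn`, the iteration count) are all illustrative assumptions, not anything specified in the abstract.

```python
import torch
import torch.nn as nn

def lambda_max(model, loss_fn, x, y, iters=20):
    """Estimate the top Hessian eigenvalue of loss_fn(model(x), y) w.r.t. the parameters.

    Power iteration converges to the eigenvalue of largest magnitude; for typical
    loss Hessians this dominant eigenvalue is the positive lambda_max of interest.
    """
    params = [p for p in model.parameters() if p.requires_grad]
    loss = loss_fn(model(x), y)
    # First-order gradient with create_graph=True so it can be differentiated again.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])

    v = torch.randn(flat_grad.numel())
    v = v / v.norm()
    eig = 0.0
    for _ in range(iters):
        # Hessian-vector product H @ v via a second backward pass (Pearlmutter's trick).
        hv = torch.autograd.grad(flat_grad @ v, params, retain_graph=True)
        hv = torch.cat([h.reshape(-1) for h in hv])
        eig = (v @ hv).item()  # Rayleigh quotient estimate of lambda_max
        v = hv / hv.norm()
    return eig

# Toy usage on random data, just to show the call pattern.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(5, 16), nn.Tanh(), nn.Linear(16, 1))
x, y = torch.randn(32, 5), torch.randn(32, 1)
print(lambda_max(model, nn.MSELoss(), x, y))
```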

