June 23, 2022, 1:10 a.m. | Simran Kaur, Jeremy Cohen, Zachary C. Lipton

cs.LG updates on arXiv.org arxiv.org

The mechanisms by which certain training interventions, such as increasing
learning rates and applying batch normalization, improve the generalization of
deep networks remain a mystery. Prior works have speculated that "flatter"
solutions generalize to unseen data better than "sharper" solutions, motivating
several metrics for measuring flatness (particularly $\lambda_{max}$, the
largest eigenvalue of the Hessian of the loss) and algorithms, such as
Sharpness-Aware Minimization (SAM) [1], that directly optimize for flatness.
Other works question the link between $\lambda_{max}$ and generalization. In …
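For context on the metric discussed above: $\lambda_{max}$ is usually estimated without ever forming the Hessian explicitly, by running power iteration on Hessian-vector products computed through automatic differentiation. Below is a minimal sketch of that standard estimation technique, assuming PyTorch; the function and argument names are illustrative and not taken from the paper or its code.

```python
import torch

def estimate_lambda_max(loss_fn, params, num_iters=20):
    """Estimate lambda_max (largest Hessian eigenvalue of the loss)
    via power iteration on Hessian-vector products.

    loss_fn: zero-argument callable returning the scalar training loss
    params:  list of model parameters (tensors with requires_grad=True)
    """
    loss = loss_fn()
    # Keep the graph so we can differentiate the gradient again (HVPs).
    grads = torch.autograd.grad(loss, params, create_graph=True)

    # Start from a random unit vector shaped like the parameters.
    v = [torch.randn_like(p) for p in params]
    norm = torch.sqrt(sum((u * u).sum() for u in v))
    v = [u / norm for u in v]

    eig = torch.tensor(0.0)
    for _ in range(num_iters):
        # Hessian-vector product: d/dtheta of (grad(loss) . v)
        gv = sum((g * u).sum() for g, u in zip(grads, v))
        hv = torch.autograd.grad(gv, params, retain_graph=True)
        # Rayleigh quotient v^T H v gives the current eigenvalue estimate.
        eig = sum((h * u).sum() for h, u in zip(hv, v))
        # Renormalize for the next power-iteration step.
        norm = torch.sqrt(sum((h * h).sum() for h in hv))
        v = [h / norm for h in hv]
    return eig.item()
```

In practice this is run on a fixed batch (or the full training set) at a given parameter setting, so that different training interventions can be compared by the $\lambda_{max}$ they reach.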

arxiv eigenvalue lg
