June 16, 2022, 1:10 a.m. | Kaifeng Lyu, Zhiyuan Li, Sanjeev Arora

cs.LG updates on arXiv.org

Normalization layers (e.g., Batch Normalization, Layer Normalization) were
introduced to help with optimization difficulties in very deep nets, but they
clearly also help generalization, even in not-so-deep nets. Motivated by the
long-held belief that flatter minima lead to better generalization, this paper
gives a mathematical analysis and supporting experiments suggesting that
normalization (together with the accompanying weight decay) encourages gradient
descent (GD) to reduce the sharpness of the loss surface. Here "sharpness" is
carefully defined, given that the loss is scale-invariant, a known consequence
of normalization. …
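
The scale-invariance mentioned in the abstract can be checked directly: if a layer's output is fed into a normalization layer (without learned affine parameters, for simplicity), rescaling that layer's weights by any c > 0 leaves the normalized output, and hence the loss, unchanged. The NumPy sketch below is not from the paper; the batch_norm helper, layer sizes, and the factor c = 3 are illustrative assumptions.

```python
import numpy as np

def batch_norm(z, eps=1e-5):
    # Normalize each feature over the batch (no learned scale/shift, for simplicity).
    mu = z.mean(axis=0, keepdims=True)
    var = z.var(axis=0, keepdims=True)
    return (z - mu) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 10))   # a batch of 64 inputs, 10 features (illustrative sizes)
W = rng.normal(size=(10, 5))    # weights of a linear layer feeding the normalization

out = batch_norm(x @ W)
out_scaled = batch_norm(x @ (3.0 * W))   # rescale the weights by c = 3

# The normalized outputs (and hence any loss computed from them) agree up to the
# tiny eps stabilizer, illustrating scale-invariance of the loss in the weights W.
print(np.max(np.abs(out - out_scaled)))
```

One consequence, which is why sharpness must be defined with care here: if L(cθ) = L(θ) for all c > 0, then differentiating twice gives ∇²L(cθ) = c⁻² ∇²L(θ), so the Hessian's eigenvalues (the usual sharpness measure) can be made arbitrarily small simply by scaling up the weights. A meaningful notion of sharpness therefore has to fix the scale, for instance by restricting attention to the unit sphere; the paper's exact definition may differ from this sketch.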

arxiv lg normalization understanding
