Sept. 5, 2022, 1:13 a.m. | Jiahui Yu, Konstantinos Spiliopoulos

stat.ML updates on arXiv.org

We study the effect of normalization on the layers of deep feed-forward neural networks. A given layer $i$ with $N_{i}$ hidden units is allowed to be normalized by $1/N_{i}^{\gamma_{i}}$ with $\gamma_{i}\in[1/2,1]$, and we study how the choice of the exponents $\gamma_{i}$ affects the statistical behavior of the neural network's output (such as its variance) as well as the test accuracy on the MNIST data set. We find that in terms of the variance of the neural network's
output …
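A minimal NumPy sketch (not the authors' code) of the scaling scheme the abstract describes: each layer's pre-activation sum over $N_{i}$ units is divided by $N_{i}^{\gamma_{i}}$ with $\gamma_{i}\in[1/2,1]$. The layer widths, the tanh activation, the exact placement of the $1/N_{i}^{\gamma_{i}}$ factor, and the particular $\gamma_{i}$ values are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layer(n_in, n_out):
    # Standard-normal weights; the 1/N**gamma normalization is applied
    # in the forward pass rather than folded into the initialization.
    return rng.standard_normal((n_in, n_out)), np.zeros(n_out)

def forward(x, layers, gammas):
    # Each matrix product sums over the N_i units feeding it and is
    # divided by N_i**gamma_i, with gamma_i in [1/2, 1].
    h = x
    for k, ((W, b), gamma) in enumerate(zip(layers, gammas)):
        n_i = W.shape[0]                       # N_i: units summed over
        z = (h @ W) / n_i**gamma + b           # normalized pre-activation
        h = z if k == len(layers) - 1 else np.tanh(z)  # linear readout last
    return h

# Example: 784 -> 512 -> 256 -> 10 (MNIST-sized input, 10 classes).
shapes = [(784, 512), (512, 256), (256, 10)]
gammas = [0.5, 0.5, 1.0]  # illustrative choices within [1/2, 1]
layers = [init_layer(n_in, n_out) for n_in, n_out in shapes]
x = rng.standard_normal((32, 784))             # batch of 32 flattened inputs
print(forward(x, layers, gammas).shape)        # (32, 10)
```

The endpoints of the range are the familiar regimes: $\gamma_{i}=1/2$ recovers the standard $1/\sqrt{N_{i}}$ scaling, while $\gamma_{i}=1$ corresponds to mean-field scaling.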

Tags: arxiv, effects, networks, neural networks, normalization
