Implicit Bias of MSE Gradient Optimization in Underparameterized Neural Networks. (arXiv:2201.04738v1 [stat.ML])
Jan. 14, 2022, 2:10 a.m. | Benjamin Bowman, Guido Montufar
cs.LG updates on arXiv.org arxiv.org
We study the dynamics of a neural network in function space when optimizing
the mean squared error via gradient flow. We show that in the
underparameterized regime the network learns eigenfunctions of an integral
operator $T_{K^\infty}$ determined by the Neural Tangent Kernel (NTK) at rates
corresponding to their eigenvalues. For example, for uniformly distributed data
on the sphere $S^{d - 1}$ and rotation invariant weight distributions, the
eigenfunctions of $T_{K^\infty}$ are the spherical harmonics. Our results can
be understood as …
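The mechanism the abstract describes can be illustrated with a small numerical sketch. This is not the paper's code, and it substitutes a generic rotation-invariant RBF kernel for the actual NTK $K^\infty$; the point is only that, under gradient flow on the MSE with a fixed kernel, the residual's projection onto each eigenvector of the kernel Gram matrix decays at a rate set by the corresponding eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100

# Uniform data on the circle S^1 (the d = 2 case of the sphere S^{d-1})
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)

# Stand-in rotation-invariant kernel (assumption: RBF, not the exact NTK)
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-sq_dists / 2.0)

# Eigendecomposition of the Gram matrix (eigenvalues in ascending order)
eigvals, eigvecs = np.linalg.eigh(K)

# Random target function values
y = rng.standard_normal(n)

# Gradient flow on the MSE with fixed kernel K solves
#   f'(t) = -K (f(t) - y),  f(0) = 0,
# so the residual is exp(-K t) y: each eigen-component of y shrinks
# independently at the rate of its eigenvalue.
t = 5.0
coeffs_0 = eigvecs.T @ y
coeffs_t = np.exp(-eigvals * t) * coeffs_0

# Fraction of each component still unlearned at time t
ratios = np.abs(coeffs_t) / (np.abs(coeffs_0) + 1e-12)

# Components with large eigenvalues are learned fastest
print(ratios[-1] <= ratios[0])
```

Under this toy model, `ratios[i]` is exactly `exp(-eigvals[i] * t)`, so the top eigen-directions of the kernel (the low-frequency spherical harmonics, in the paper's setting) are fit first while small-eigenvalue directions linger.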