Web: http://arxiv.org/abs/2205.01445

May 4, 2022, 1:11 a.m. | Jimmy Ba, Murat A. Erdogdu, Taiji Suzuki, Zhichao Wang, Denny Wu, Greg Yang

cs.LG updates on arXiv.org

We study the first gradient descent step on the first-layer parameters
$\boldsymbol{W}$ in a two-layer neural network $f(\boldsymbol{x}) =
\frac{1}{\sqrt{N}}\boldsymbol{a}^\top\sigma(\boldsymbol{W}^\top\boldsymbol{x})$,
where $\boldsymbol{W}\in\mathbb{R}^{d\times N},
\boldsymbol{a}\in\mathbb{R}^{N}$ are randomly initialized, and the training
objective is the empirical MSE loss $\frac{1}{n}\sum_{i=1}^n
(f(\boldsymbol{x}_i)-y_i)^2$. In the proportional asymptotic limit where
$n,d,N\to\infty$ at the same rate, and in an idealized student-teacher setting, we
show that the first gradient update contains a rank-1 "spike", which results in
an alignment between the first-layer weights and the linear component of the teacher model $f^*$.
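To make the claim concrete, here is a minimal NumPy sketch (not the authors' code) that forms this first-step gradient on $\boldsymbol{W}$ and inspects its singular values. The tanh nonlinearity, the single-index teacher, the problem sizes, and all initialization scalings are illustrative assumptions; only the network form $f(\boldsymbol{x}) = \frac{1}{\sqrt{N}}\boldsymbol{a}^\top\sigma(\boldsymbol{W}^\top\boldsymbol{x})$ and the MSE objective come from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, N = 1000, 500, 500              # proportional regime: n, d, N grow at the same rate

# Student at random initialization (scalings are illustrative assumptions).
W = rng.standard_normal((d, N)) / np.sqrt(d)
a = rng.standard_normal(N)

def f(X):
    # f(x) = a^T sigma(W^T x) / sqrt(N), with sigma = tanh as an example nonlinearity
    return np.tanh(X @ W) @ a / np.sqrt(N)

# Hypothetical single-index teacher y = sigma_*(<beta, x>) standing in for the
# idealized student-teacher setting.
beta = rng.standard_normal(d)
beta /= np.linalg.norm(beta)
X = rng.standard_normal((n, d))
y = np.tanh(X @ beta)

# Gradient of the empirical MSE loss (1/n) sum_i (f(x_i) - y_i)^2 w.r.t. W:
#   dL/dW = (2 / (n sqrt(N))) sum_i (f(x_i) - y_i) * x_i (a ⊙ sigma'(W^T x_i))^T
resid = f(X) - y                       # residuals, shape (n,)
S = (1.0 - np.tanh(X @ W) ** 2) * a    # a ⊙ sigma'(W^T x_i) per sample, shape (n, N)
G = (2.0 / (n * np.sqrt(N))) * X.T @ (resid[:, None] * S)

# The spectrum of G should exhibit one outlier singular value (the rank-1 "spike"),
# with a leading left singular vector that correlates with the teacher direction beta.
U, sv, _ = np.linalg.svd(G, full_matrices=False)
print("top singular values:", sv[:5])
print("alignment with teacher |<u_1, beta>|:", abs(U[:, 0] @ beta))
```

Under these assumptions, the top singular value should separate from the bulk of the spectrum and its left singular vector should carry a nontrivial projection onto the teacher direction, which is the alignment described in the abstract.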

arxiv gradient learning ml representation
