Grokking as the Transition from Lazy to Rich Training Dynamics
April 12, 2024, 4:43 a.m. | Tanishq Kumar, Blake Bordelon, Samuel J. Gershman, Cengiz Pehlevan
cs.LG updates on arXiv.org arxiv.org
Abstract: We propose that the grokking phenomenon, where the train loss of a neural network decreases much earlier than its test loss, can arise due to a neural network transitioning from lazy training dynamics to a rich, feature-learning regime. To illustrate this mechanism, we study the simple setting of vanilla gradient descent on a polynomial regression problem with a two-layer neural network, which exhibits grokking without regularization in a way that cannot be explained …
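To make the described setting concrete, below is a minimal sketch (not the authors' code) of full-batch gradient descent on a polynomial regression target with a two-layer network, tracking train and test loss over training. The target polynomial, ReLU activation, network width, learning rate, and initialization scale are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem setup (assumed values, not the paper's configuration)
d, n_train, n_test, width = 32, 128, 1024, 256
beta = rng.standard_normal(d) / np.sqrt(d)   # ground-truth direction
poly = lambda z: z**2 - 1                    # degree-2 polynomial target

X_tr = rng.standard_normal((n_train, d))
X_te = rng.standard_normal((n_test, d))
y_tr = poly(X_tr @ beta)
y_te = poly(X_te @ beta)

# Two-layer network: f(x) = a . relu(W x)
W = rng.standard_normal((width, d)) / np.sqrt(d)
a = rng.standard_normal(width) / np.sqrt(width)

def forward(X, W, a):
    H = np.maximum(X @ W.T, 0.0)             # hidden ReLU activations
    return H @ a, H

lr, steps = 0.05, 20_000
for step in range(steps):
    pred, H = forward(X_tr, W, a)
    err = pred - y_tr
    train_loss = np.mean(err**2)

    # Gradients of (1/2) * mean squared error w.r.t. a and W
    grad_a = H.T @ err / n_train
    grad_W = (err[:, None] * (H > 0) * a).T @ X_tr / n_train

    a -= lr * grad_a
    W -= lr * grad_W

    if step % 1000 == 0:
        test_loss = np.mean((forward(X_te, W, a)[0] - y_te)**2)
        print(f"step {step:6d}  train {train_loss:.4f}  test {test_loss:.4f}")
```

Plotting the logged train and test losses over many steps is the standard way to look for a grokking-style gap, where the train loss falls long before the test loss does.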