April 12, 2024, 4:43 a.m. | Tanishq Kumar, Blake Bordelon, Samuel J. Gershman, Cengiz Pehlevan

cs.LG updates on arXiv.org

arXiv:2310.06110v3 Announce Type: replace-cross
Abstract: We propose that the grokking phenomenon, where the train loss of a neural network decreases much earlier than its test loss, can arise due to a neural network transitioning from lazy training dynamics to a rich, feature-learning regime. To illustrate this mechanism, we study the simple setting of vanilla gradient descent on a polynomial regression problem with a two-layer neural network that exhibits grokking without regularization in a way that cannot be explained …
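The abstract only sketches the experimental setting, so the following is a hypothetical minimal sketch of that kind of experiment, not the authors' code: a two-layer ReLU network trained with full-batch gradient descent on a quadratic (polynomial) regression target, with an output scale `alpha` as an assumed knob for how lazy or rich the dynamics are. All names, the target function, and the hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, width, n_train, n_test = 30, 256, 100, 2000
alpha, lr, steps = 0.05, 0.5, 20000   # small alpha ~ closer to the lazy (kernel) regime

# Assumed target: a centered quadratic of a one-dimensional projection of the input.
w_star = rng.normal(size=d) / np.sqrt(d)
target = lambda X: (X @ w_star) ** 2 - 1.0

X_tr, X_te = rng.normal(size=(n_train, d)), rng.normal(size=(n_test, d))
y_tr, y_te = target(X_tr), target(X_te)

# Two-layer network f(x) = alpha * a . relu(W x); both layers are trained.
W = rng.normal(size=(width, d)) / np.sqrt(d)
a = rng.normal(size=width) / np.sqrt(width)

def forward(X, W, a):
    H = X @ W.T                              # pre-activations, shape (n, width)
    return alpha * (np.maximum(H, 0.0) @ a), H

for step in range(steps + 1):
    f_tr, H = forward(X_tr, W, a)
    err = (f_tr - y_tr) / n_train            # d(loss)/d(f) for 0.5 * mean squared error
    phi = np.maximum(H, 0.0)
    grad_a = alpha * phi.T @ err
    grad_W = alpha * ((err[:, None] * a[None, :]) * (H > 0)).T @ X_tr
    a -= lr * grad_a
    W -= lr * grad_W
    if step % 2000 == 0:
        f_te, _ = forward(X_te, W, a)
        print(f"step {step:6d}  train MSE {np.mean((f_tr - y_tr) ** 2):.4f}"
              f"  test MSE {np.mean((f_te - y_te) ** 2):.4f}")
```

With a small `alpha` the early dynamics stay close to the network's linearization, so the train loss can drop while the test loss lags until the first-layer features align with the target direction; sweeping `alpha` (and the learning rate) is one way to probe the lazy-to-rich transition the abstract describes, though whether a clear grokking-style delay appears depends on these hyperparameters.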

abstract arxiv cond-mat.dis-nn cs.lg dynamics feature lazy loss network neural network stat.ml study test train training transition type
