Jan. 7, 2022, 2:10 a.m. | Alethea Power, Yuri Burda, Harri Edwards, Igor Babuschkin, Vedant Misra

cs.LG updates on arXiv.org

In this paper we propose to study generalization of neural networks on small
algorithmically generated datasets. In this setting, questions about data
efficiency, memorization, generalization, and speed of learning can be studied
in great detail. In some situations we show that neural networks learn through
a process of "grokking" a pattern in the data, improving generalization
performance from random chance level to perfect generalization, and that this
improvement in generalization can happen well past the point of overfitting. We
also …
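The abstract describes training on small algorithmically generated datasets: the full table of a binary operation over a small discrete set, with part of the table withheld as validation data, trained long past the point of memorization. As a rough illustration of that setup, here is a minimal sketch assuming modular addition mod 97 as the operation and a plain MLP; the paper itself trains a small transformer on several such operations, so the model, hyperparameters, and helper names below are assumptions for illustration, not the authors' code.

```python
# Minimal sketch of a grokking-style experiment.
# Assumptions (not from the paper's code): modular addition as the task,
# a small MLP instead of the paper's transformer, arbitrary hyperparameters.
import torch
import torch.nn as nn

torch.manual_seed(0)
P = 97            # modulus for the binary operation table
TRAIN_FRAC = 0.3  # fraction of the table used for training

# Full table of a + b (mod P): all P*P input pairs and their labels.
a = torch.arange(P).repeat_interleave(P)
b = torch.arange(P).repeat(P)
y = (a + b) % P

# One-hot encode the pair (a, b) as a 2P-dimensional input vector.
x = torch.cat([nn.functional.one_hot(a, P),
               nn.functional.one_hot(b, P)], dim=1).float()

# Random train/validation split of the table.
perm = torch.randperm(P * P)
n_train = int(TRAIN_FRAC * P * P)
tr, va = perm[:n_train], perm[n_train:]

model = nn.Sequential(nn.Linear(2 * P, 256), nn.ReLU(), nn.Linear(256, P))
# Weight decay is included because the paper reports regularization
# affects how quickly generalization eventually appears.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

def accuracy(idx):
    with torch.no_grad():
        return (model(x[idx]).argmax(1) == y[idx]).float().mean().item()

# Full-batch training, run long past perfect train accuracy, watching
# whether validation accuracy eventually jumps from chance toward 1.0.
for step in range(100_000):
    opt.zero_grad()
    loss_fn(model(x[tr]), y[tr]).backward()
    opt.step()
    if step % 1000 == 0:
        print(f"step {step:6d}  train {accuracy(tr):.3f}  val {accuracy(va):.3f}")
```

Under this kind of setup, the signature the abstract calls "grokking" would show up in the log as train accuracy saturating early while validation accuracy sits near chance for many steps before climbing to near-perfect.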

Tags: arxiv, datasets, overfitting, small
