Feb. 15, 2024, 5:41 a.m. | Jack Miller, Patrick Gleeson, Charles O'Neill, Thang Bui, Noam Levi

cs.LG updates on arXiv.org

arXiv:2402.08946v1 Announce Type: new
Abstract: Neural networks sometimes exhibit grokking, a phenomenon where perfect or near-perfect performance is achieved on a validation set well after the same performance has been obtained on the corresponding training set. In this workshop paper, we introduce a robust technique for measuring grokking, based on fitting an appropriate functional form. We then use this to investigate the sharpness of transitions in training and validation accuracy under two settings. The first setting is the theoretical framework …
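
The abstract describes measuring grokking by fitting a functional form to accuracy curves and comparing where the training and validation transitions occur. As a minimal sketch of that idea (not the paper's exact functional form), the snippet below fits a generic logistic curve to each accuracy-vs-step series with SciPy's `curve_fit` and reports the gap between the fitted transition midpoints; the function names, the logistic parameterisation, and the synthetic data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit


def sigmoid(step, midpoint, rate, floor, ceiling):
    """Generic logistic form: accuracy rises from `floor` to `ceiling`
    around `midpoint`, with sharpness controlled by `rate`."""
    return floor + (ceiling - floor) / (1.0 + np.exp(-rate * (step - midpoint)))


def fit_transition(steps, accuracy):
    """Fit the logistic form to one accuracy curve; return the fitted
    transition midpoint and rate (sharpness)."""
    half = (accuracy.min() + accuracy.max()) / 2.0
    mid_guess = steps[np.argmin(np.abs(accuracy - half))]
    p0 = [mid_guess, 0.01, accuracy.min(), accuracy.max()]
    params, _ = curve_fit(sigmoid, steps, accuracy, p0=p0, maxfev=10_000)
    midpoint, rate, _, _ = params
    return midpoint, rate


def grokking_gap(steps, train_acc, val_acc):
    """Grokking delay: validation transition midpoint minus training
    transition midpoint (positive => validation lags training)."""
    t_mid, _ = fit_transition(steps, train_acc)
    v_mid, _ = fit_transition(steps, val_acc)
    return v_mid - t_mid


if __name__ == "__main__":
    # Synthetic example (hypothetical data): training accuracy transitions
    # near step 1,000, validation accuracy much later, near step 8,000.
    steps = np.linspace(0, 20_000, 500)
    rng = np.random.default_rng(0)
    train_acc = sigmoid(steps, 1_000, 0.01, 0.1, 1.0) + rng.normal(0, 0.01, steps.size)
    val_acc = sigmoid(steps, 8_000, 0.005, 0.1, 1.0) + rng.normal(0, 0.01, steps.size)
    print(f"estimated grokking gap: {grokking_gap(steps, train_acc, val_acc):.0f} steps")
```

In this framing, the fitted `rate` parameter gives one way to quantify the sharpness of the training and validation transitions that the abstract mentions, while the midpoint gap quantifies the grokking delay itself.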
