April 2, 2024, 7:44 p.m. | Jack Miller, Charles O'Neill, Thang Bui

cs.LG updates on arXiv.org

arXiv:2310.17247v2 Announce Type: replace
Abstract: In some settings, neural networks exhibit a phenomenon known as "grokking", where they achieve perfect or near-perfect accuracy on the validation set long after the same performance has been reached on the training set. In this paper, we discover that grokking is not limited to neural networks but occurs in other settings, such as Gaussian process (GP) classification, GP regression, linear regression, and Bayesian neural networks. We also uncover a mechanism by which to induce …
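To make the definition above concrete, here is a minimal sketch of how one might quantify the delay the abstract describes; the helper name, the accuracy threshold, and the toy curves are illustrative assumptions of mine, not anything taken from the paper:

```python
import numpy as np

def grokking_gap(train_acc, val_acc, threshold=0.99):
    """Return the number of epochs between the training curve and the
    validation curve first reaching `threshold` accuracy, or None if
    either curve never does. A large positive gap is the delayed
    generalization pattern described in the abstract."""
    train_acc = np.asarray(train_acc)
    val_acc = np.asarray(val_acc)
    train_hits = np.nonzero(train_acc >= threshold)[0]
    val_hits = np.nonzero(val_acc >= threshold)[0]
    if len(train_hits) == 0 or len(val_hits) == 0:
        return None
    return int(val_hits[0] - train_hits[0])

# Toy curves: training accuracy saturates around epoch 100, while
# validation accuracy only catches up around epoch 900.
epochs = np.arange(1000)
train_acc = 1 / (1 + np.exp(-(epochs - 80) / 10))
val_acc = 1 / (1 + np.exp(-(epochs - 880) / 15))
print(grokking_gap(train_acc, val_acc))  # -> a gap of roughly 800 epochs
```

Applied to real accuracy histories, a large positive gap is exactly the signature of grokking that the paper reports across neural networks, GP models, and linear regression.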

