Regularization, early-stopping and dreaming: a Hopfield-like setup to address generalization and overfitting
Feb. 21, 2024, 5:43 a.m. | Elena Agliari, Francesco Alemanno, Miriam Aquaro, Alberto Fachechi
cs.LG updates on arXiv.org
Abstract: In this work we approach attractor neural networks from a machine learning perspective: we look for optimal network parameters by applying gradient descent to a regularized loss function. Within this framework, the optimal neuron-interaction matrices turn out to be a class of matrices that correspond to Hebbian kernels revised by a reiterated unlearning protocol. Remarkably, the extent of such unlearning is proved to be related to the regularization hyperparameter of the loss function and …
Subjects: cs.LG; cond-mat.dis-nn
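The abstract points to a concrete construction: the optimal neuron-interaction matrix is a Hebbian kernel passed through reiterated unlearning ("dreaming"). As a rough illustration, here is a minimal NumPy sketch of a dreaming-type kernel of the form J = (1/N) ξᵀ (1+t)(I + tC)⁻¹ ξ, where C is the pattern-overlap matrix and t is the unlearning time; the function name `dreaming_kernel` and this particular parametrization are assumptions taken from the related dreaming-networks literature, not the paper's exact construction, and the paper's precise mapping between t, the regularization hyperparameter, and training time is not reproduced here.

```python
import numpy as np

def dreaming_kernel(xi, t):
    """Hebbian coupling matrix revised by reiterated unlearning ("dreaming").

    Illustrative sketch, not the paper's exact construction: `xi` is a
    (P, N) array of +/-1 patterns and `t` >= 0 is the unlearning time.
    t = 0 recovers the plain Hebbian kernel J = xi^T xi / N, while
    t -> infinity approaches the projector (pseudo-inverse) rule.
    """
    P, N = xi.shape
    C = xi @ xi.T / N                                # (P, P) pattern-overlap matrix
    A = (1.0 + t) * np.linalg.inv(np.eye(P) + t * C)
    J = xi.T @ A @ xi / N                            # (N, N) interaction matrix
    np.fill_diagonal(J, 0.0)                         # remove self-couplings
    return J

# Usage: store random patterns and check one-step retrieval stability.
rng = np.random.default_rng(0)
xi = rng.choice([-1.0, 1.0], size=(20, 400))         # P = 20 patterns, N = 400 neurons
J = dreaming_kernel(xi, t=10.0)
recalled = np.sign(J @ xi[0])
print("pattern 0 stable under one update:", np.array_equal(recalled, xi[0]))
```

In this picture the dreaming time t plays the role the abstract attributes to the regularization hyperparameter: small t leaves the Hebbian kernel (and its spurious attractors) largely intact, while larger t progressively unlearns the cross-talk between stored patterns.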