Feb. 21, 2024, 5:43 a.m. | Elena Agliari, Francesco Alemanno, Miriam Aquaro, Alberto Fachechi

cs.LG updates on arXiv.org arxiv.org

arXiv:2308.01421v2 Announce Type: replace
Abstract: In this work we approach attractor neural networks from a machine learning perspective: we look for optimal network parameters by applying a gradient descent over a regularized loss function. Within this framework, the optimal neuron-interaction matrices turn out to be a class of matrices which correspond to Hebbian kernels revised by a reiterated unlearning protocol. Remarkably, the extent of such unlearning is proved to be related to the regularization hyperparameter of the loss function and …

Tags: arxiv, cond-mat.dis-nn, cs.lg, dreaming, early-stopping, gradient, loss function, machine learning, neural networks, overfitting, regularization
