Feb. 21, 2024, 5:43 a.m. | Elena Agliari, Francesco Alemanno, Miriam Aquaro, Alberto Fachechi

cs.LG updates on arXiv.org

arXiv:2308.01421v2 Announce Type: replace
Abstract: In this work we approach attractor neural networks from a machine learning perspective: we look for optimal network parameters by applying gradient descent to a regularized loss function. Within this framework, the optimal neuron-interaction matrices turn out to be a class of matrices corresponding to Hebbian kernels revised by a reiterated unlearning protocol. Remarkably, the extent of such unlearning is proved to be related to the regularization hyperparameter of the loss function and …
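A minimal sketch of the setup the abstract describes, assuming a one-step pattern-reconstruction loss with an L2 penalty; the specific loss, the hyperparameter name lam, and all numerical values are illustrative assumptions, not the paper's exact formulation. The interaction matrix J is learned by gradient descent under the usual Hopfield-style constraints (symmetric, zero diagonal) and then compared with the plain Hebbian kernel.

```python
# Hedged sketch (not the authors' code): learn a Hopfield-style
# interaction matrix J by gradient descent on a regularized loss.
# Loss assumed here: one-step reconstruction error on the stored
# patterns plus an L2 penalty with hyperparameter lam.
import numpy as np

rng = np.random.default_rng(0)
N, P = 64, 8                                # neurons, stored patterns
xi = rng.choice([-1.0, 1.0], size=(P, N))   # random binary patterns

J = np.zeros((N, N))
lam, lr = 1e-2, 1e-2                        # regularization strength, learning rate (assumed)

for _ in range(2000):
    # one-step field h_{mu,i} = sum_j J_{ij} xi_{mu,j}
    h = xi @ J.T
    err = h - xi
    # gradient of ||xi - h||^2 / (2 N P) + lam ||J||^2 / 2
    grad = (err.T @ xi) / (N * P) + lam * J
    J -= lr * grad
    np.fill_diagonal(J, 0.0)                # no self-interactions
    J = 0.5 * (J + J.T)                     # keep couplings symmetric

hebb = (xi.T @ xi) / N                      # plain Hebbian kernel for comparison
np.fill_diagonal(hebb, 0.0)
print("cosine overlap with Hebbian kernel:",
      np.sum(J * hebb) / (np.linalg.norm(J) * np.linalg.norm(hebb)))
```

Varying lam here plays the role of the regularization hyperparameter mentioned in the abstract: the paper's claim is that the optimal J corresponds to a Hebbian kernel revised by a reiterated unlearning ("dreaming") protocol, with the extent of unlearning set by that hyperparameter.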
