Web: http://arxiv.org/abs/2202.04557

June 20, 2022, 1:11 a.m. | Beren Millidge, Tommaso Salvatori, Yuhang Song, Thomas Lukasiewicz, Rafal Bogacz

cs.LG updates on arXiv.org

A large number of neural network models of associative memory have been
proposed in the literature. These include the classical Hopfield networks
(HNs), sparse distributed memories (SDMs), and more recently the modern
continuous Hopfield networks (MCHNs), which possess close links with
self-attention in machine learning. In this paper, we propose a general
framework for understanding the operation of such memory networks as a sequence
of three operations: similarity, separation, and projection. We derive all
these memory models as instances of …
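The similarity–separation–projection pipeline described in the abstract can be illustrated with a minimal sketch of associative recall in the MCHN style. Everything below is an assumption for illustration, not code from the paper: dot-product similarity, a softmax separation function with an assumed inverse temperature `beta`, and projection as a weighted sum over stored patterns.

```python
import numpy as np

def retrieve(memories, query, beta=8.0):
    """Associative recall as similarity -> separation -> projection.

    memories: (N, d) array of stored patterns
    query:    (d,) possibly corrupted cue
    beta:     assumed sharpness parameter (hypothetical, not from the paper)
    """
    sim = memories @ query                    # similarity: dot product with each stored pattern
    w = np.exp(beta * (sim - sim.max()))      # separation: sharpen the score vector (softmax numerator)
    w /= w.sum()                              # normalize to softmax weights
    return w @ memories                       # projection: weighted combination of stored patterns

# Usage: recover a stored pattern from a noisy cue.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 16))
M /= np.linalg.norm(M, axis=1, keepdims=True)  # unit-norm patterns
cue = M[2] + 0.1 * rng.standard_normal(16)     # corrupted version of pattern 2
out = retrieve(M, cue)
```

Swapping the separation function changes the model family: with a high-`beta` softmax the retrieval behaves like a modern continuous Hopfield network, while other separation functions recover other classical memory models in the framework.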

