Feb. 22, 2024, 5:43 a.m. | Vivien Cabannes, Elvis Dohmatob, Alberto Bietti

cs.LG updates on arXiv.org

arXiv:2310.02984v2 Announce Type: replace-cross
Abstract: Learning arguably involves the discovery and memorization of abstract rules. The aim of this paper is to study associative memory mechanisms. Our model is based on high-dimensional matrices consisting of outer products of embeddings, which relates to the inner layers of transformer language models. We derive precise scaling laws with respect to sample size and parameter size, and discuss the statistical efficiency of different estimators, including optimization-based algorithms. We provide extensive numerical experiments to validate …
