April 11, 2024, 8:30 p.m. | /u/tfburns


Mechanically, attention appears to perform *hetero-association*; in principle, it can also *mix auto- and hetero-association* together.

Question: What abilities does this permit?
Answer: A lot!
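To make the auto- vs. hetero-association distinction concrete, here is a minimal NumPy sketch (my own illustration, not code from the paper). With softmax attention over stored patterns, using the patterns themselves as values gives auto-association (a noisy query is cleaned up toward the nearest stored pattern), while using *different* patterns as values gives hetero-association (querying with one pattern retrieves an associated one). The cyclic-shift pairing below is an arbitrary choice for the demo.

```python
import numpy as np

def attention(q, K, V, beta=8.0):
    """Softmax attention: return a similarity-weighted combination of
    rows of V, keyed by the similarity of q to the rows of K."""
    w = np.exp(beta * (K @ q))
    w /= w.sum()
    return w @ V

rng = np.random.default_rng(0)
n = 64
patterns = rng.choice([-1.0, 1.0], size=(4, n))  # stored +/-1 patterns

# Auto-association: keys and values are the same patterns, so a
# corrupted query is pulled back toward the stored original.
noisy = patterns[0].copy()
noisy[:5] *= -1  # flip a few bits
recalled = attention(noisy / np.sqrt(n), patterns / np.sqrt(n), patterns)

# Hetero-association: values are a *different* set of patterns (here,
# a cyclic shift), so querying with pattern i retrieves pattern i+1,
# i.e. the attention step implements a transition rather than recall.
next_patterns = np.roll(patterns, -1, axis=0)
transitioned = attention(patterns[0] / np.sqrt(n),
                         patterns / np.sqrt(n), next_patterns)
```

After the auto-associative step, `np.sign(recalled)` recovers `patterns[0]` despite the flipped bits; after the hetero-associative step, `np.sign(transitioned)` matches `patterns[1]`.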

**Finite automata**

By assigning neural activity patterns to image or text data and encoding their combinations as auto-associative attractors (states) or hetero-associative quasi-attractors (transitions), we can simulate finite automata.

(see section 3.4 and appendix A12 of the paper linked below)

[ An example of mapping a finite automaton to a 'memory graph'. ](https://i.redd.it/gv1ob2eqswtc1.gif)
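As a rough sketch of the state/transition idea (my own toy construction, not the paper's implementation), the states of a small automaton can be stored as random ±1 codes, each input symbol gets a hetero-associative memory mapping current-state codes to next-state codes, and an auto-associative cleanup step snaps the result back onto a stored state. The parity automaton and the helper names below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64  # code dimension

# Toy automaton: tracks the parity of 'b's in the input word.
states = ["even", "odd"]
delta = {("even", "a"): "even", ("even", "b"): "odd",
         ("odd", "a"): "odd", ("odd", "b"): "even"}

# Each state gets a random +/-1 code (an attractor pattern).
codes = {s: rng.choice([-1.0, 1.0], size=n) for s in states}
S = np.stack([codes[s] for s in states])

def attend(q, K, V, beta=8.0):
    """Softmax attention over stored keys K with values V."""
    w = np.exp(beta * (K @ q) / n)
    w /= w.sum()
    return w @ V

# One hetero-associative memory per input symbol:
# keys are current-state codes, values are next-state codes.
trans = {}
for sym in ("a", "b"):
    V = np.stack([codes[delta[(s, sym)]] for s in states])
    trans[sym] = (S, V)

def run(word, start="even"):
    x = codes[start]
    for sym in word:
        K, V = trans[sym]
        x = attend(x, K, V)            # hetero-association: transition
        x = np.sign(attend(x, S, S))   # auto-association: clean up state
    return states[int(np.argmax(S @ x))]  # decode nearest stored state
```

For example, `run("abb")` ends in the `"even"` state and `run("aab")` in `"odd"`, matching the automaton's transition table.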

**Multi-scale graph representations** …

