[R] The attention mechanism of Transformers resembles a modern iteration of associative memory models from neuroscience. I show auto- & hetero-associative mixtures can perform a range of tasks + suggest new neuro-inspired Transformer interp approaches
April 11, 2024, 8:30 p.m. | /u/tfburns
Question: What abilities does this permit?
Answer: A lot!
**Finite automata**
By assigning neural activity patterns to image or text data and converting their combinations into auto-associative attractors (states) or hetero-associative quasi-attractors (transitions), we can simulate finite automata; a toy sketch follows after the figure below.
(see section 3.4 and appendix A12 of the paper linked below)
[ An example of mapping a finite automaton to a 'memory graph'. ](https://i.redd.it/gv1ob2eqswtc1.gif)
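To make the state/transition mapping concrete, here is a minimal NumPy sketch (my own toy illustration, not code from the paper): automaton states are stored as auto-associative Hopfield-style attractors, and each transition is stored hetero-associatively by binding the current-state pattern with an input-symbol pattern (an elementwise product, one of several possible key constructions) and associating that key with the next-state pattern. The parity automaton, the dimension `N = 256`, and the binding scheme are all illustrative assumptions.

```python
# Toy sketch (illustrative assumptions, not the paper's code): simulate a
# 2-state finite automaton with a mixture of auto- and hetero-association.
import numpy as np

rng = np.random.default_rng(0)
N = 256  # neurons per pattern (assumed; larger N means less cross-talk)

def sgn(x):
    # Bipolar threshold; ties broken toward +1 to avoid zero activities.
    return np.where(x >= 0, 1.0, -1.0)

def rand_pattern():
    return rng.choice([-1.0, 1.0], size=N)

# Example automaton (assumed): tracks the parity of 'b' symbols seen so far.
states  = {q: rand_pattern() for q in ("q0", "q1")}
symbols = {s: rand_pattern() for s in ("a", "b")}
delta   = {("q0", "a"): "q0", ("q0", "b"): "q1",
           ("q1", "a"): "q1", ("q1", "b"): "q0"}

# Auto-association: Hebbian outer products make each state an attractor.
W_auto = sum(np.outer(v, v) for v in states.values()) / N
np.fill_diagonal(W_auto, 0.0)

# Hetero-association: bind (state, symbol) into a key via elementwise
# product and associate it with the next-state pattern. The retrieved
# pattern is a quasi-attractor: it is abandoned at the next input.
def key(q, s):
    return states[q] * symbols[s]

W_het = sum(np.outer(states[q2], key(q1, s))
            for (q1, s), q2 in delta.items()) / N

def step(v, s):
    """One transition: hetero-associative recall, then attractor clean-up."""
    v = sgn(W_het @ (v * symbols[s]))
    for _ in range(5):          # settle into the nearest stored state
        v = sgn(W_auto @ v)
    return v

def run(word, q_start="q0"):
    v = states[q_start]
    for s in word:
        v = step(v, s)
    # Decode by nearest stored state pattern.
    return max(states, key=lambda q: float(states[q] @ v))

print(run("abba"))  # even number of b's -> q0
print(run("ab"))    # odd number of b's  -> q1
```

The elementwise-product binding keeps distinct (state, symbol) keys pseudo-orthogonal, so a single matrix can store all four transitions; the auto-associative clean-up step is what makes the states proper attractors rather than one-shot retrievals.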
**Multi-scale graph representations** …