[R] The attention mechanism of Transformers resembles a modern iteration of associative memory models from neuroscience. I show auto- & hetero-associative mixtures can perform a range of tasks + suggest new neuro-inspired Transformer interp approaches
April 11, 2024, 8:30 p.m. | /u/tfburns
Machine Learning www.reddit.com
Question: What abilities does this permit?
Answer: A lot!
**Finite automata**
By assigning neural activity patterns to image or text data and converting combinations of them into auto-associative attractors (states) or hetero-associative quasi-attractors (transitions), we can simulate finite automata.
(see section 3.4 and appendix A12 of the paper linked below)
[ An example of mapping a finite automaton to a 'memory graph'. ](https://i.redd.it/gv1ob2eqswtc1.gif)
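The state/transition mapping above can be sketched numerically. The following is a minimal illustration (not the paper's implementation): states and input symbols are random ±1 patterns, hetero-association is a softmax-attention lookup over stored (state, symbol) → next-state pairs, and an auto-associative `sign` clean-up snaps the retrieved pattern back onto an attractor. The two-state automaton and all names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # pattern dimension

# Hypothetical 2-state automaton over alphabet {a, b}:
# A --a--> B, B --b--> A, self-loops otherwise.
states = {s: rng.choice([-1.0, 1.0], size=d) for s in "AB"}
symbols = {c: rng.choice([-1.0, 1.0], size=d) for c in "ab"}
transitions = {("A", "a"): "B", ("A", "b"): "A",
               ("B", "a"): "B", ("B", "b"): "A"}

# Keys: concatenated (state, symbol) patterns; values: next-state patterns.
K = np.stack([np.concatenate([states[s], symbols[c]])
              for (s, c) in transitions])
V = np.stack([states[t] for t in transitions.values()])

def step(state_vec, symbol_vec, beta=4.0):
    """Hetero-associative retrieval via softmax attention over stored pairs."""
    q = np.concatenate([state_vec, symbol_vec])
    w = np.exp(beta * (K @ q) / np.sqrt(2 * d))
    # Auto-associative clean-up: sign() pushes the readout onto an attractor.
    return np.sign((w / w.sum()) @ V)

# Run the automaton on input "ab", starting in state A.
cur = states["A"]
for c in "ab":
    cur = step(cur, symbols[c])

# Identify the final state by pattern overlap: A --a--> B --b--> A.
final = max(states, key=lambda s: states[s] @ cur)
print(final)  # → A
```

With random high-dimensional patterns, the matching key's dot product dominates the softmax, so each retrieval is effectively exact; the `sign` step plays the role of the auto-associating attractor dynamics described above.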
**Multi-scale graph representations** …