iMixer: hierarchical Hopfield network implies an invertible, implicit and iterative MLP-Mixer
April 2, 2024, 7:44 p.m. | Toshihiro Ota, Masato Taki
cs.LG updates on arXiv.org arxiv.org
Abstract: In the last few years, the success of Transformers in computer vision has stimulated the discovery of many alternative models that compete with Transformers, such as the MLP-Mixer. Despite their weak inductive bias, these models have achieved performance comparable to well-studied convolutional neural networks. Recent studies on modern Hopfield networks suggest a correspondence between certain energy-based associative-memory models and Transformers or the MLP-Mixer, shedding some light on the theoretical background of Transformer-type architectures …
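The Hopfield/Transformer correspondence the abstract alludes to rests on the retrieval rule of modern Hopfield networks, whose update is an attention-like map. A minimal NumPy sketch of that retrieval rule (an illustration of the general correspondence, not the paper's iMixer architecture; the function name and parameters are chosen here for exposition):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def hopfield_retrieve(patterns, query, beta=8.0, steps=3):
    """Iterated retrieval in a modern Hopfield network.

    patterns: (d, N) matrix X whose columns are stored patterns.
    query:    (d,) state vector xi.
    The update xi <- X softmax(beta X^T xi) is the retrieval rule of
    modern Hopfield networks; its resemblance to attention is the basis
    of the Hopfield/Transformer correspondence mentioned above.
    """
    xi = query
    for _ in range(steps):
        xi = patterns @ softmax(beta * patterns.T @ xi)
    return xi

# Usage: recover a stored pattern from a noisy query.
rng = np.random.default_rng(0)
X = rng.standard_normal((64, 10))                 # 10 stored patterns
noisy = X[:, 3] + 0.1 * rng.standard_normal(64)   # corrupted copy of pattern 3
out = hopfield_retrieve(X, noisy)
```

With random high-dimensional patterns the softmax concentrates almost all weight on the closest stored pattern, so the iteration converges to it in a few steps.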