Transformers Provably Learn Sparse Token Selection While Fully-Connected Nets Cannot
June 12, 2024, 4:47 a.m. | Zixuan Wang, Stanley Wei, Daniel Hsu, Jason D. Lee
cs.LG updates on arXiv.org
Abstract: The transformer architecture has prevailed across deep learning settings due to its exceptional ability to select and compose structural information. Motivated by these capabilities, Sanford et al. proposed the sparse token selection task, on which transformers excel while fully-connected networks (FCNs) fail in the worst case. Building on that result, we strengthen the FCN lower bound to an average-case setting and establish an algorithmic separation of transformers over FCNs. Specifically, a one-layer transformer trained with …
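To make the separation concrete, here is a minimal sketch of what a sparse-token-selection-style task might look like, assuming the q-sparse averaging formulation used by Sanford et al. (an assumption; the paper's exact setup may differ, and all names below are illustrative): each position i is assigned a small subset S_i of token indices, and the target at that position is the average of the tokens indexed by S_i.

```python
# Illustrative sketch of a sparse-token-selection-style task, assuming the
# q-sparse averaging formulation of Sanford et al. (assumption: the paper's
# exact task definition may differ). Names here are hypothetical.
import numpy as np

def sparse_token_selection_target(tokens: np.ndarray, subsets: list) -> np.ndarray:
    """For each position i, the target is the average of the tokens
    indexed by its sparse subset S_i (|S_i| = q << sequence length)."""
    return np.stack([tokens[S].mean(axis=0) for S in subsets])

rng = np.random.default_rng(0)
n, d, q = 16, 8, 3                      # sequence length, token dim, sparsity
tokens = rng.standard_normal((n, d))    # one input sequence of n tokens
subsets = [rng.choice(n, size=q, replace=False).tolist() for _ in range(n)]

targets = sparse_token_selection_target(tokens, subsets)
print(targets.shape)  # (16, 8): one q-sparse average per position
```

Intuitively, an attention layer can route exactly the q relevant tokens to each output position, whereas an FCN sees the sequence as one flat vector, which is the gap the paper's average-case lower bound formalizes.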