In-Context Learning of a Linear Transformer Block: Benefits of the MLP Component and One-Step GD Initialization
Feb. 26, 2024, 5:42 a.m. | Ruiqi Zhang, Jingfeng Wu, Peter L. Bartlett
cs.LG updates on arXiv.org
Abstract: We study the \emph{in-context learning} (ICL) ability of a \emph{Linear Transformer Block} (LTB) that combines a linear attention component and a linear multi-layer perceptron (MLP) component. For ICL of linear regression with a Gaussian prior and a \emph{non-zero mean}, we show that LTB can achieve nearly Bayes-optimal ICL risk. In contrast, a model using only linear attention must incur an irreducible additive approximation error. Furthermore, we establish a correspondence between LTB and one-step gradient descent estimators …
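To make the setting concrete, here is a minimal sketch in plain NumPy of the kind of one-step gradient descent estimators the abstract and title allude to. All names, dimensions, and the step size are illustrative assumptions rather than the paper's construction: one estimator starts from a zero initialization (the natural reading of what plain linear attention corresponds to), the other from the non-zero prior mean (the initialization benefit the title attributes to the MLP component).

```python
import numpy as np

rng = np.random.default_rng(0)

d, n = 5, 200          # feature dimension and context length (illustrative)
mu = np.full(d, 2.0)   # non-zero prior mean of the task vector (illustrative)
eta = 1.0 / n          # step size for the single GD step (illustrative)

def one_step_gd(X, y, w0, eta):
    """One gradient-descent step on the least-squares loss, starting from w0."""
    grad = X.T @ (X @ w0 - y)
    return w0 - eta * grad

# Draw one ICL task: w* ~ N(mu, I), context (X, y) with y = X w* + noise.
w_star = mu + rng.standard_normal(d)
X = rng.standard_normal((n, d))
y = X @ w_star + 0.1 * rng.standard_normal(n)

# Zero initialization vs. prior-mean initialization of one-step GD.
w_zero = one_step_gd(X, y, np.zeros(d), eta)
w_mean = one_step_gd(X, y, mu, eta)

# Compare prediction error on a fresh query point.
x_query = rng.standard_normal(d)
print("error, zero init:     ", abs(x_query @ w_zero - x_query @ w_star))
print("error, prior-mean init:", abs(x_query @ w_mean - x_query @ w_star))
```

When the prior mean is far from zero, the prior-mean-initialized step typically predicts the query far more accurately in this toy setup, which mirrors the abstract's claim that linear attention alone leaves an irreducible error that the added MLP component can remove.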