Feb. 5, 2024, 3:43 p.m. | Juno Kim, Taiji Suzuki

cs.LG updates on arXiv.org (arxiv.org)

Large language models based on the Transformer architecture have demonstrated impressive in-context learning capabilities. However, existing theoretical studies of how this phenomenon arises are limited to the dynamics of a single attention layer trained on linear regression tasks. In this paper, we study the optimization of a Transformer consisting of a fully connected layer followed by a linear attention layer. The MLP acts as a common nonlinear representation or feature map, greatly enhancing the power of in-context …
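For intuition, here is a minimal sketch (in PyTorch, not the authors' code) of the kind of architecture described: a token-wise MLP feature map followed by a single softmax-free linear attention layer that reads a prompt of labelled examples (x_i, y_i) plus a query x and outputs a scalar prediction. All dimensions, the prompt encoding, and layer names are illustrative assumptions rather than the paper's exact parameterization.

```python
import torch
import torch.nn as nn


class MLPLinearAttention(nn.Module):
    """Sketch: shared MLP feature map + one linear attention layer
    for in-context regression (assumed prompt format, not the paper's)."""

    def __init__(self, d_in: int, d_hidden: int):
        super().__init__()
        # Nonlinear feature map applied to every token (x_i, y_i).
        self.mlp = nn.Sequential(
            nn.Linear(d_in + 1, d_hidden),
            nn.ReLU(),
        )
        # Linear attention: raw dot-product scores, no softmax.
        self.W_q = nn.Linear(d_hidden, d_hidden, bias=False)
        self.W_k = nn.Linear(d_hidden, d_hidden, bias=False)
        self.W_v = nn.Linear(d_hidden, 1, bias=False)

    def forward(self, xs: torch.Tensor, ys: torch.Tensor,
                x_query: torch.Tensor) -> torch.Tensor:
        # xs: (batch, n, d_in), ys: (batch, n), x_query: (batch, d_in)
        ctx = torch.cat([xs, ys.unsqueeze(-1)], dim=-1)        # context tokens (x_i, y_i)
        qry = torch.cat([x_query,
                         torch.zeros_like(ys[:, :1])], dim=-1)  # query token (x, 0)
        h_ctx = self.mlp(ctx)                                   # (batch, n, d_hidden)
        h_qry = self.mlp(qry).unsqueeze(1)                      # (batch, 1, d_hidden)
        q = self.W_q(h_qry)                                     # (batch, 1, d_hidden)
        k = self.W_k(h_ctx)                                     # (batch, n, d_hidden)
        v = self.W_v(h_ctx)                                     # (batch, n, 1)
        scores = q @ k.transpose(1, 2) / k.shape[1]             # linear attention weights
        return (scores @ v).squeeze(-1).squeeze(-1)             # predicted y for the query


# Example usage on random in-context regression prompts
# (batch of 8 prompts, each with 16 labelled 4-dimensional examples):
model = MLPLinearAttention(d_in=4, d_hidden=32)
xs, ys, xq = torch.randn(8, 16, 4), torch.randn(8, 16), torch.randn(8, 4)
pred = model(xs, ys, xq)   # shape (8,)
```

The MLP here plays the role the abstract attributes to it: a common nonlinear representation shared across context and query tokens, on top of which the linear attention layer forms its prediction.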

