April 24, 2024, 4:42 a.m. | Anej Svete, Ryan Cotterell

cs.LG updates on arXiv.org

arXiv:2404.14994v1 Announce Type: cross
Abstract: Plenty of existing work has analyzed the abilities of the transformer architecture by describing its representational capacity with formal models of computation. However, the focus so far has been on analyzing the architecture in terms of language \emph{acceptance}. We contend that this is an ill-suited problem in the study of \emph{language models} (LMs), which are definitionally \emph{probability distributions} over strings. In this paper, we focus on the relationship between transformer LMs and $n$-gram LMs, a …

