April 24, 2024, 4:42 a.m. | Anej Svete, Ryan Cotterell

cs.LG updates on arXiv.org

arXiv:2404.14994v1 Announce Type: cross
Abstract: Plenty of existing work has analyzed the abilities of the transformer architecture by describing its representational capacity with formal models of computation. However, the focus so far has been on analyzing the architecture in terms of language \emph{acceptance}. We contend that this is an ill-suited problem in the study of \emph{language models} (LMs), which are definitionally \emph{probability distributions} over strings. In this paper, we focus on the relationship between transformer LMs and $n$-gram LMs, a …
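To make the abstract's distinction concrete: an $n$-gram LM is a probability distribution over strings obtained by factoring each string's probability into conditional probabilities of a symbol given the preceding $n-1$ symbols. The sketch below is a minimal, hypothetical bigram ($n = 2$) illustration of this idea; the toy corpus, symbols, and maximum-likelihood estimation are assumptions for illustration, not taken from the paper.

```python
from collections import defaultdict

# A minimal sketch of a bigram (n = 2) language model as a probability
# distribution over strings. The toy corpus and MLE estimates are
# illustrative assumptions, not from the paper.

BOS, EOS = "<s>", "</s>"  # beginning- and end-of-string markers

corpus = [["a", "b", "a"], ["a", "b", "b"], ["b", "a"]]

# Count bigram occurrences over the corpus.
counts = defaultdict(lambda: defaultdict(int))
for sent in corpus:
    symbols = [BOS] + sent + [EOS]
    for prev, curr in zip(symbols, symbols[1:]):
        counts[prev][curr] += 1

def prob(string):
    """P(string) = prod_t P(symbol_t | symbol_{t-1}), with MLE estimates."""
    symbols = [BOS] + list(string) + [EOS]
    p = 1.0
    for prev, curr in zip(symbols, symbols[1:]):
        total = sum(counts[prev].values())
        if total == 0 or counts[prev][curr] == 0:
            return 0.0  # unseen bigram: zero probability under MLE
        p *= counts[prev][curr] / total
    return p

print(prob("aba"))  # probability the bigram LM assigns to the string "aba"
```

Unlike an acceptor, which only answers whether a string is in a language, this model assigns every string a probability, which is the object of study when comparing transformer LMs to $n$-gram LMs.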

