April 15, 2024, 4:42 a.m. | Nived Rajaraman, Jiantao Jiao, Kannan Ramchandran

cs.LG updates on arXiv.org

arXiv:2404.08335v1 Announce Type: cross
Abstract: While there has been a large body of research attempting to circumvent tokenization for language modeling (Clark et al., 2022; Xue et al., 2022), the current consensus is that it is a necessary initial step for designing state-of-the-art performant language models. In this paper, we investigate tokenization from a theoretical point of view by studying the behavior of transformers on simple data generating processes. When trained on data drawn from certain simple $k^{\text{th}}$-order Markov processes …
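As a concrete illustration of the kind of data-generating process the abstract refers to, the sketch below samples a binary sequence from a randomly drawn $k^{\text{th}}$-order Markov source. This is a minimal, hypothetical construction for intuition only; the binary alphabet, the function name `sample_kth_order_markov`, and the uniform draw of transition probabilities are assumptions, not the paper's exact setup.

```python
import numpy as np

def sample_kth_order_markov(k: int, n: int, seed: int = 0) -> np.ndarray:
    """Draw a length-n binary sequence from a random k-th order Markov process.

    Illustrative sketch: each length-k context gets its own Bernoulli
    parameter for the next symbol, so the next symbol depends on exactly
    the previous k symbols.
    """
    rng = np.random.default_rng(seed)
    # One transition probability P(x_t = 1 | previous k symbols) per context.
    probs = rng.uniform(size=2 ** k)
    seq = list(rng.integers(0, 2, size=k))          # random initial context
    for _ in range(n - k):
        ctx = int("".join(map(str, seq[-k:])), 2)   # encode context as an index
        seq.append(int(rng.random() < probs[ctx]))
    return np.array(seq)

# Example: 10,000 symbols from a (hypothetical) 2nd-order binary Markov source.
data = sample_kth_order_markov(k=2, n=10_000)
```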

