April 4, 2024, 4:43 a.m. | Junxiong Wang, Tushaar Gangavarapu, Jing Nathan Yan, Alexander M. Rush

cs.LG updates on arXiv.org

arXiv:2401.13660v2 Announce Type: replace-cross
Abstract: Token-free language models learn directly from raw bytes and remove the inductive bias of subword tokenization. Operating on bytes, however, results in significantly longer sequences. In this setting, standard autoregressive Transformers scale poorly as the effective memory required grows with sequence length. The recent development of the Mamba state space model (SSM) offers an appealing alternative approach with a fixed-sized memory state and efficient decoding. We propose MambaByte, a token-free adaptation of the Mamba SSM …
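To make the abstract's two claims concrete, here is a minimal NumPy sketch (not the paper's implementation): byte-level "tokenization" is just the 256 possible byte values, and a linear state space recurrence keeps a fixed-size memory state no matter how long the byte sequence grows. Mamba's selective (input-dependent) parameters, discretization, and parallel scan are omitted; the matrices A, B, C and the embedding table below are illustrative placeholders.

```python
import numpy as np

# Byte-level "tokenization": the vocabulary is the 256 possible byte
# values, so no learned subword tokenizer (and none of its inductive
# bias) is needed -- but sequences get much longer than word counts.
text = "Token-free models read raw bytes."
byte_ids = list(text.encode("utf-8"))  # ints in [0, 255]
print(len(text.split()), "words ->", len(byte_ids), "byte tokens")

# Toy linear SSM recurrence with a fixed-size hidden state.
# Placeholder parameters; Mamba makes these input-dependent.
d_state, d_in = 16, 8
rng = np.random.default_rng(0)
A = rng.normal(scale=0.1, size=(d_state, d_state))  # state transition
B = rng.normal(scale=0.1, size=(d_state, d_in))     # input projection
C = rng.normal(scale=0.1, size=(d_in, d_state))     # output projection
embed = rng.normal(scale=0.1, size=(256, d_in))     # stand-in byte embeddings

h = np.zeros(d_state)          # the memory state: size never grows
for b in byte_ids:
    h = A @ h + B @ embed[b]   # O(1) memory and compute per byte
    y = C @ h                  # per-step output used for decoding
```

The contrast with the Transformer case in the abstract: an autoregressive Transformer's attention cache grows with the number of bytes processed, while the recurrence above carries only the fixed-size state `h` from step to step, which is what makes byte-level decoding efficient.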
