[R] SpaceByte: Towards Deleting Tokenization from Large Language Modeling - Rice University 2024 - Practically the same performance as subword tokenizers without their many downsides!
April 24, 2024, 11:42 a.m. | /u/Singularian2501
Machine Learning | www.reddit.com
Github: [https://github.com/kjslag/spacebyte](https://github.com/kjslag/spacebyte)
Abstract:
>Tokenization is widely used in large language models because it significantly improves performance. However, **tokenization imposes several disadvantages, such as performance biases, increased adversarial vulnerability, decreased character-level modeling performance, and increased modeling complexity.** To address these disadvantages without sacrificing performance, we propose SpaceByte, a novel **byte-level decoder architecture that closes the performance gap between byte-level and subword autoregressive language modeling.** SpaceByte consists of a byte-level Transformer model, but with extra larger transformer blocks inserted in …
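
The abstract cuts off before specifying where the larger blocks are inserted, but the core idea it describes is a byte-level model where cheap transformer blocks run at every byte while larger blocks run only at a sparse set of positions. Below is a minimal PyTorch sketch of that pattern, not the authors' code: the class name `SpaceByteSketch`, the layer sizes, and the rule of treating spaces as boundary positions are all illustrative assumptions based on the abstract and the model's name.

```python
import torch
import torch.nn as nn

class SpaceByteSketch(nn.Module):
    """Illustrative sketch of the idea in the abstract: small byte-level
    transformer blocks run at every byte, while extra, larger blocks run
    only at selected positions. All sizes and the space-based boundary
    rule are assumptions, not the paper's actual hyperparameters."""

    def __init__(self, vocab=256, d_local=192, d_global=768, heads=6):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_local)
        self.local = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_local, heads, batch_first=True),
            num_layers=2)
        self.up = nn.Linear(d_local, d_global)    # widen at boundaries
        self.glob = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_global, heads, batch_first=True),
            num_layers=2)
        self.down = nn.Linear(d_global, d_local)  # project back down
        self.head = nn.Linear(d_local, vocab)     # next-byte logits

    def forward(self, byte_ids):                  # (1, T) int64 byte values
        T = byte_ids.size(1)
        causal = nn.Transformer.generate_square_subsequent_mask(T)
        # Cheap byte-level blocks run at every position.
        x = self.local(self.embed(byte_ids), mask=causal)
        # Assumed boundary rule: run the large blocks only at spaces,
        # which roughly align with word boundaries in English text.
        idx = (byte_ids[0] == ord(" ")).nonzero(as_tuple=True)[0]
        if idx.numel() > 0:
            gmask = nn.Transformer.generate_square_subsequent_mask(idx.numel())
            # Expensive blocks see only the short boundary subsequence.
            g = self.glob(self.up(x[:, idx]), mask=gmask)
            x = x.clone()
            x[:, idx] = x[:, idx] + self.down(g)  # merge global info back in
        return self.head(x)

model = SpaceByteSketch()
ids = torch.tensor([list(b"byte level models need no tokenizer")])
logits = model(ids)  # (1, T, 256) next-byte logits
```

The appeal of this layout is that the expensive, wide blocks attend over a sequence roughly as short as a subword-token sequence, while the model still reads and writes raw bytes, which is how a byte-level model could close the compute gap with subword tokenization.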