TokenMonster Ungreedy Subword Tokenizer V4: Enables Models to be 4x Smaller Whilst Achieving Higher Chr/Token (With Evidence) [P]
July 13, 2023, 6:02 p.m. | /u/Pan000
Machine Learning www.reddit.com
TokenMonster is an ungreedy subword tokenizer and vocabulary trainer for Python, Go & JavaScript. You can use one of my pretrained vocabularies or generate your own with the included tools.
TokenMonster can tokenize text more efficiently than other tokenization methods, even when using a much smaller vocabulary. Here is a 24,000-token TokenMonster vocabulary benchmarked against tiktoken cl100k\_base (100,256 tokens) and LLaMa (32,000 tokens) ([link to interactive benchmark](https://bot.co/tokenmonster/benchmark.html?a=tiktoken%20cl100k_base&b=llama%20tokenmonster&c=englishcode-24000-unfiltered-v1)):
https://preview.redd.it/o16a9tbrurbb1.png?width=1506&format=png&auto=webp&s=66c11d2b8defd634c86756064125b70e8e5cb6d6
Unlike previous versions of TokenMonster, the …
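To illustrate why an "ungreedy" tokenizer can reach a higher chr/token ratio than a greedy one, here is a toy sketch in pure Python. This is not TokenMonster's actual algorithm or API — it just contrasts greedy longest-match tokenization with a dynamic-programming tokenizer that minimizes token count, using a made-up five-entry vocabulary:

```python
def greedy_tokenize(text, vocab, max_len):
    """Greedy longest-match: always take the longest vocab entry at each position."""
    tokens, i = [], 0
    while i < len(text):
        for length in range(min(max_len, len(text) - i), 0, -1):
            piece = text[i:i + length]
            if piece in vocab:
                tokens.append(piece)
                i += length
                break
        else:
            raise ValueError(f"untokenizable character at position {i}")
    return tokens

def minimal_tokenize(text, vocab, max_len):
    """'Ungreedy' (here: globally minimal token count) via dynamic programming."""
    n = len(text)
    best = [float("inf")] * (n + 1)  # best[i] = fewest tokens covering text[:i]
    back = [None] * (n + 1)          # back[i] = start index of the last token
    best[0] = 0
    for i in range(n):
        if best[i] == float("inf"):
            continue
        for length in range(1, min(max_len, n - i) + 1):
            piece = text[i:i + length]
            if piece in vocab and best[i] + 1 < best[i + length]:
                best[i + length] = best[i] + 1
                back[i + length] = i
    tokens, j = [], n
    while j > 0:
        tokens.append(text[back[j]:j])
        j = back[j]
    return tokens[::-1]

# Hypothetical vocabulary where greedy matching is suboptimal:
vocab = {"abc", "ab", "cde", "c", "d", "e"}
print(greedy_tokenize("abcde", vocab, 3))   # ['abc', 'd', 'e'] — 3 tokens
print(minimal_tokenize("abcde", vocab, 3))  # ['ab', 'cde'] — 2 tokens
```

Greedy grabs "abc" immediately and is then forced into single characters, while the minimal tokenizer backs off to "ab" so that "cde" fits — the same intuition behind trading a locally longest match for a globally cheaper segmentation.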
Tags: javascript, machinelearning, python, text, token, tokens, tools, trainer, words