Optimal alphabet for single text compression. (arXiv:2201.05234v1 [cs.IT])
Jan. 17, 2022, 2:10 a.m. | Armen E. Allahverdyan, Andranik Khachatryan
cs.CL updates on arXiv.org arxiv.org
A text can be viewed via different representations, i.e. as a sequence of
letters, n-grams of letters, syllables, words, and phrases. Here we study the
optimal noiseless compression of texts using the Huffman code, where the
alphabet of encoding coincides with one of those representations. We show that
it is necessary to account for the codebook when compressing a single text.
Hence, the total compression comprises the optimally compressed text --
characterized by the entropy of the alphabet elements …
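The core point that single-text compression must charge for the codebook can be illustrated with a minimal sketch. The snippet below (an illustrative assumption, not the paper's exact accounting) builds Huffman code lengths over an alphabet of tokens and adds a rough codebook cost of 8 bits per symbol character plus the code length itself:

```python
import heapq
from collections import Counter

def huffman_code_lengths(freqs):
    """Return a dict mapping each symbol to its Huffman code length."""
    if len(freqs) == 1:
        return {next(iter(freqs)): 1}
    # Heap entries: (frequency, tiebreaker, {symbol: depth}).
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        # Merging two subtrees increases every leaf's depth by one.
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

def total_compressed_bits(text, tokenize):
    """Body bits plus an illustrative codebook cost for one text."""
    tokens = tokenize(text)
    freqs = Counter(tokens)
    lengths = huffman_code_lengths(freqs)
    body = sum(lengths[t] * freqs[t] for t in freqs)
    # Hypothetical codebook model: 8 bits per character of each
    # distinct symbol, plus that symbol's code length.
    codebook = sum(8 * len(str(s)) + lengths[s] for s in freqs)
    return body + codebook
```

Larger alphabets (words, phrases) shorten the compressed body but enlarge the codebook, so the optimal representation for a *single* text depends on this trade-off rather than on entropy alone.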