April 30, 2024, 4:43 a.m. | Nishant Luitel, Nirajan Bekoju, Anand Kumar Sah, Subarna Shakya

cs.LG updates on arXiv.org arxiv.org

arXiv:2404.18071v1 Announce Type: cross
Abstract: Recent language models use subwording mechanisms to handle Out-of-Vocabulary (OOV) words seen at test time, and their generation capacity is generally measured using perplexity, an intrinsic metric. It is known that increasing the subword granularity results in a decrease in perplexity. However, studies of how subwording affects the understanding capacity of language models are few, and limited to a handful of languages. To reduce this gap we used 6 different tokenization …
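Perplexity here is the exponential of the average negative token log-likelihood, so its value depends on how the text was tokenized. Below is a minimal sketch (with hypothetical token probabilities, not figures from the paper) of why finer subword granularity tends to lower per-token perplexity:

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the negative mean per-token log-probability."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Hypothetical probabilities for the same sentence under two tokenizations.
# Coarser tokenization: fewer tokens, each harder to predict.
coarse = [math.log(p) for p in [0.10, 0.08, 0.12]]
# Finer subword tokenization: more tokens, each individually easier.
fine = [math.log(p) for p in [0.30, 0.25, 0.35, 0.40, 0.30, 0.28]]

print(f"coarse-grained PPL: {perplexity(coarse):.2f}")  # ~10.1
print(f"fine-grained PPL:   {perplexity(fine):.2f}")    # ~3.2
```

Because the token count itself changes with granularity, per-token perplexity is not directly comparable across tokenizers, which is one reason to also evaluate understanding capacity downstream, as the abstract suggests.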

