June 25, 2024, 4:42 a.m. | Sho Takase, Ryokan Ri, Shun Kiyono, Takuya Kato

cs.CL updates on arXiv.org

arXiv:2406.16508v1 Announce Type: new
Abstract: This paper empirically investigates the relationship between subword vocabulary size and the performance of large language models (LLMs) to provide insights into how to choose the vocabulary size. Experimental results show that larger vocabulary sizes lead to better LLM performance. Moreover, we consider a continual training scenario in which a pre-trained language model is trained on a different target language. We introduce a simple method to use a new vocabulary instead of the pre-defined one. …
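The abstract does not spell out how the new vocabulary replaces the pre-defined one. As a hedged illustration only, the sketch below shows one common heuristic for such a swap: initialize each new-vocabulary token's embedding as the mean of the old embeddings of the old-tokenizer subwords that cover its surface string, then continue pre-training on the target-language corpus. The model and tokenizer names are placeholders, and this is not necessarily the paper's method.

```python
# A minimal sketch of swapping in a new vocabulary for continual training,
# assuming a Hugging Face causal LM. Each new token embedding is set to the
# mean of the old embeddings of its old-tokenizer subwords (an assumption;
# the paper's exact method is not given in the abstract).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("source-model")            # placeholder
old_tok = AutoTokenizer.from_pretrained("source-model")                 # placeholder
new_tok = AutoTokenizer.from_pretrained("target-language-tokenizer")    # placeholder

old_emb = model.get_input_embeddings().weight.detach()
hidden = old_emb.size(1)
new_emb = torch.empty(len(new_tok), hidden)

with torch.no_grad():
    for token_id in range(len(new_tok)):
        # Recover the new token's surface string, then re-segment it
        # with the old tokenizer to find covering old-vocabulary subwords.
        text = new_tok.decode([token_id])
        old_ids = old_tok.encode(text, add_special_tokens=False)
        if old_ids:
            # Average the old embeddings of the covering subwords.
            new_emb[token_id] = old_emb[old_ids].mean(dim=0)
        else:
            # Fall back to the overall mean for unmappable tokens.
            new_emb[token_id] = old_emb.mean(dim=0)

# Resize the embedding matrix (and tied output weights, if any) to the
# new vocabulary, install the initialized embeddings, then continue
# pre-training on the target-language corpus as usual.
model.resize_token_embeddings(len(new_tok))
model.get_input_embeddings().weight.data.copy_(new_emb)
```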
