April 30, 2024, 4:50 a.m. | Nikolay Bogoychev, Pinzhen Chen, Barry Haddow, Alexandra Birch

cs.CL updates on arXiv.org

arXiv:2311.09709v2 Announce Type: replace
Abstract: Deploying large language models (LLMs) is challenging due to their intensive computational and memory requirements. Our research examines vocabulary trimming (VT), inspired by restricting embedding entries to the language of interest, to improve time and memory efficiency. While such modifications have proven effective in tasks like machine translation, tailoring them to LLMs demands specific adaptations given the diverse nature of LLM applications. We apply two language heuristics to trim the full vocabulary - Unicode-based script …
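The Unicode-based script heuristic mentioned in the abstract can be illustrated with a minimal sketch: keep only tokens whose alphabetic characters belong to a target script, plus special tokens, and reassign contiguous ids for the trimmed embedding table. This is an illustrative reconstruction, not the paper's implementation; the toy vocabulary, function names, and the use of Unicode character names as a script proxy are all assumptions.

```python
import unicodedata


def is_target_script(token: str, script: str = "LATIN") -> bool:
    # Heuristic: a token is kept if every alphabetic character's Unicode
    # name mentions the target script (a rough proxy for script membership).
    for ch in token:
        if ch.isalpha():
            try:
                name = unicodedata.name(ch)
            except ValueError:
                return False  # unnamed character: drop conservatively
            if script not in name:
                return False
    return True


def trim_vocab(vocab, script="LATIN", specials=("<s>", "</s>", "<unk>")):
    # Keep special tokens plus tokens passing the script heuristic,
    # then reassign dense ids so the embedding matrix can shrink.
    kept = [t for t in vocab if t in specials or is_target_script(t, script)]
    return {tok: i for i, tok in enumerate(kept)}


# Toy vocabulary (hypothetical): Latin tokens survive, Cyrillic/CJK are trimmed.
vocab = ["<s>", "</s>", "hello", "мир", "世界", "cat!", "42"]
trimmed = trim_vocab(vocab)
```

After trimming, only rows of the embedding table corresponding to the kept ids need to be loaded, which is where the memory savings in the abstract come from.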
