Fast Vocabulary Transfer for Language Model Compression
Feb. 16, 2024, 5:43 a.m. | Leonidas Gee, Andrea Zugarini, Leonardo Rigutini, Paolo Torroni
cs.LG updates on arXiv.org
Abstract: Real-world business applications require a trade-off between language model performance and size. We propose a new method for model compression that relies on vocabulary transfer. We evaluate the method on various vertical domains and downstream tasks. Our results indicate that vocabulary transfer can be effectively used in combination with other compression techniques, yielding a significant reduction in model size and inference time while marginally compromising on performance.
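The abstract does not spell out how the vocabulary transfer is performed. A common scheme in this line of work, and a reasonable reading of the paper's setup, is to train a smaller in-domain tokenizer and initialize each new token's embedding from the old model's embeddings: copied directly when the token already exists in the old vocabulary, and averaged over the old tokenizer's decomposition otherwise. The sketch below illustrates that idea; the function names, the `old_tokenize` callback, and the fallback random initialization are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def transfer_embeddings(old_vocab, old_emb, new_vocab, old_tokenize):
    """Initialize an embedding matrix for a new (e.g. smaller, in-domain)
    vocabulary from an existing model's embeddings.

    old_vocab:    dict mapping old token -> row index in old_emb
    old_emb:      (|V_old|, dim) array of pretrained embeddings
    new_vocab:    dict mapping new token -> row index in the new matrix
    old_tokenize: callable decomposing a string into old-vocab tokens
    """
    dim = old_emb.shape[1]
    new_emb = np.zeros((len(new_vocab), dim))
    for tok, idx in new_vocab.items():
        if tok in old_vocab:
            # Token shared by both vocabularies: copy its embedding.
            new_emb[idx] = old_emb[old_vocab[tok]]
        else:
            # New token: average the embeddings of its old-tokenizer pieces.
            pieces = old_tokenize(tok)
            rows = [old_emb[old_vocab[p]] for p in pieces if p in old_vocab]
            if rows:
                new_emb[idx] = np.mean(rows, axis=0)
            else:
                # Fallback for tokens with no recoverable pieces (assumption).
                new_emb[idx] = np.random.normal(0.0, 0.02, dim)
    return new_emb

# Toy example: "abcd" is new; the old tokenizer splits it into "ab" + "cd".
old_vocab = {"ab": 0, "cd": 1}
old_emb = np.array([[1.0, 1.0], [3.0, 3.0]])
new_vocab = {"ab": 0, "abcd": 1}
new_emb = transfer_embeddings(old_vocab, old_emb, new_vocab,
                              lambda t: ["ab", "cd"] if t == "abcd" else [t])
# "ab" is copied; "abcd" becomes the mean of "ab" and "cd" -> [2.0, 2.0]
```

Because the transferred embeddings start close to the pretrained ones, the compressed model typically needs only brief fine-tuning to recover, which is consistent with the abstract's claim of reduced size and inference time at a small performance cost.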