Feb. 20, 2024, 5:51 a.m. | Nicolas Boizard, Kevin El-Haddad, Céline Hudelot, Pierre Colombo

cs.CL updates on arXiv.org

arXiv:2402.12030v1 Announce Type: new
Abstract: Deploying large language models (LLMs) of several billion parameters can be impractical in most industrial use cases due to constraints such as cost, latency limitations, and hardware accessibility. Knowledge distillation (KD) offers a solution by compressing knowledge from resource-intensive large models into smaller ones. Various strategies exist, some relying on the text generated by the teacher model and optionally utilizing its logits to enhance learning. However, these methods based on logits often require both teacher …
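
The baseline the abstract refers to is standard logit-based KD. As a minimal sketch of that general technique (not the paper's proposed loss), the student can be trained to match the teacher's temperature-softened token distribution via KL divergence, in the classic Hinton et al. formulation; the function name, tensor shapes, and temperature default below are illustrative assumptions, and the sketch presumes PyTorch and a shared teacher/student vocabulary.

```python
import torch
import torch.nn.functional as F

def logit_distillation_loss(student_logits: torch.Tensor,
                            teacher_logits: torch.Tensor,
                            temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between temperature-softened teacher and student
    distributions (classic logit-based KD).

    Both tensors are (batch, seq_len, vocab_size) over the SAME vocabulary:
    the shared-tokenizer assumption that logit-based methods lean on.
    """
    t = temperature
    log_p_student = F.log_softmax(student_logits / t, dim=-1)
    p_teacher = F.softmax(teacher_logits / t, dim=-1)
    # Rescale by t^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t * t)

# Illustrative usage with random tensors standing in for model outputs.
if __name__ == "__main__":
    student = torch.randn(4, 16, 32000)  # smaller student, same vocab size
    teacher = torch.randn(4, 16, 32000)  # frozen teacher's logits
    print(logit_distillation_loss(student, teacher).item())
```

Note that this formulation only works when both models emit logits over an identical vocabulary, which is exactly the constraint the abstract highlights for logit-based approaches.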
