Feb. 21, 2024, 5:43 a.m. | Gyeongman Kim, Doohyuk Jang, Eunho Yang

cs.LG updates on arXiv.org

arXiv:2402.12842v1 Announce Type: cross
Abstract: Recent advancements in large language models (LLMs) have raised concerns about inference costs, increasing the need for research into model compression. While knowledge distillation (KD) is a prominent method for this, research on KD for generative language models like LLMs is relatively sparse, and the approach of distilling student-friendly knowledge, which has shown promising performance in KD for classification models, remains unexplored in generative language models. To explore this approach, we propose PromptKD, a simple …

