Feb. 21, 2024, 5:43 a.m. | Gyeongman Kim, Doohyuk Jang, Eunho Yang

cs.LG updates on arXiv.org

arXiv:2402.12842v1 Announce Type: cross
Abstract: Recent advancements in large language models (LLMs) have raised concerns about inference costs, increasing the need for research into model compression. While knowledge distillation (KD) is a prominent method for this, research on KD for generative language models like LLMs is relatively sparse, and the approach of distilling student-friendly knowledge, which has shown promising performance in KD for classification models, remains unexplored in generative language models. To explore this approach, we propose PromptKD, a simple …
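The announcement truncates the abstract before the method details, but as a rough illustration of the general idea it names (combining prompt tuning with knowledge distillation for a generative language model), below is a minimal sketch. It assumes a toy setup: `TinyLM`, the vocabulary size, the soft-prompt length, and the joint KL objective are hypothetical stand-ins chosen for this example, not the actual PromptKD algorithm from the paper.

```python
# Illustrative sketch only: a frozen toy "teacher" LM with a learnable soft
# prompt prepended to its input, distilled into a smaller "student" LM via a
# token-level KL loss. All names and sizes here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, D_TEACHER, D_STUDENT, PROMPT_LEN, SEQ_LEN = 100, 64, 32, 4, 16

class TinyLM(nn.Module):
    """Toy causal LM: embedding -> GRU -> vocab logits (stand-in for an LLM)."""
    def __init__(self, d_model):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, d_model)
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)
        self.head = nn.Linear(d_model, VOCAB)

    def forward(self, tokens, prefix_embeds=None):
        x = self.embed(tokens)
        if prefix_embeds is not None:  # prepend a learnable soft prompt
            x = torch.cat([prefix_embeds.expand(x.size(0), -1, -1), x], dim=1)
        h, _ = self.rnn(x)
        return self.head(h)

teacher, student = TinyLM(D_TEACHER), TinyLM(D_STUDENT)
for p in teacher.parameters():  # teacher weights stay frozen
    p.requires_grad_(False)

# Learnable soft prompt prepended to the teacher's input (prompt tuning);
# only the prompt and the student are updated during distillation.
soft_prompt = nn.Parameter(torch.randn(1, PROMPT_LEN, D_TEACHER) * 0.02)
optimizer = torch.optim.Adam(list(student.parameters()) + [soft_prompt], lr=1e-3)

tokens = torch.randint(0, VOCAB, (8, SEQ_LEN))  # stand-in training batch

for step in range(3):
    # Teacher sees [soft prompt; tokens]; drop logits at the prompt positions.
    t_logits = teacher(tokens, prefix_embeds=soft_prompt)[:, PROMPT_LEN:]
    s_logits = student(tokens)
    # KL(teacher || student) per token, averaged over batch and positions.
    t_prob = F.softmax(t_logits, dim=-1)
    kd_loss = (t_prob * (F.log_softmax(t_logits, dim=-1)
                         - F.log_softmax(s_logits, dim=-1))).sum(-1).mean()
    optimizer.zero_grad()
    kd_loss.backward()
    optimizer.step()
    print(f"step {step}: kd_loss = {kd_loss.item():.4f}")
```

In this sketch the soft prompt is optimized jointly with the student through the distillation loss, so the frozen teacher's output distributions can shift toward ones the smaller student can match; the paper itself should be consulted for the actual PromptKD objective and training procedure.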
