PromptKD: Distilling Student-Friendly Knowledge for Generative Language Models via Prompt Tuning
Feb. 21, 2024, 5:43 a.m. | Gyeongman Kim, Doohyuk Jang, Eunho Yang
cs.LG updates on arXiv.org
Abstract: Recent advancements in large language models (LLMs) have raised concerns about inference costs, increasing the need for research into model compression. While knowledge distillation (KD) is a prominent method for this, research on KD for generative language models like LLMs is relatively sparse, and the approach of distilling student-friendly knowledge, which has shown promising performance in KD for classification models, remains unexplored in generative language models. To explore this approach, we propose PromptKD, a simple …
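The abstract combines two standard ingredients: knowledge distillation (matching a student's output distribution to a frozen teacher's) and prompt tuning (optimizing a small set of soft prompt embeddings instead of full model weights). A minimal sketch of how these pieces fit together is below; the toy linear "models", the choice to attach the soft prompt to the student, and the forward-KL objective are illustrative assumptions, not the PromptKD recipe itself.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
vocab, d_model, prompt_len, seq_len = 50, 16, 4, 8

# Toy stand-ins for teacher/student LMs: single linear heads over embeddings.
teacher = torch.nn.Linear(d_model, vocab)
student = torch.nn.Linear(d_model, vocab)
for p in teacher.parameters():
    p.requires_grad_(False)  # the teacher stays frozen during distillation

# Learnable soft prompt prepended to the student's input (prompt tuning);
# attaching it to the student side is an assumption for this sketch.
soft_prompt = torch.nn.Parameter(torch.randn(prompt_len, d_model) * 0.02)

def kd_loss(x):
    """Forward KL between teacher and student next-token distributions."""
    with torch.no_grad():
        t_logits = teacher(x)                     # teacher sees the raw input
    s_in = torch.cat([soft_prompt, x], dim=0)     # student sees prompt + input
    s_logits = student(s_in)[prompt_len:]         # discard prompt positions
    return F.kl_div(F.log_softmax(s_logits, dim=-1),
                    F.softmax(t_logits, dim=-1), reduction="batchmean")

x = torch.randn(seq_len, d_model)
opt = torch.optim.SGD([soft_prompt, *student.parameters()], lr=0.1)
losses = []
for _ in range(50):
    loss = kd_loss(x)
    opt.zero_grad()
    loss.backward()
    opt.step()
    losses.append(loss.item())
```

Only the soft prompt and the student parameters receive gradients; the teacher provides fixed target distributions, which is the cost-saving point of distillation the abstract motivates.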