May 15, 2024, 4:41 a.m. | Shreyan Ganguly, Roshan Nayak, Rakshith Rao, Ujan Deb, Prathosh AP

cs.LG updates on arXiv.org

arXiv:2405.08019v1 Announce Type: new
Abstract: Knowledge distillation, a widely used model compression technique, works by transferring knowledge from a cumbersome teacher model to a lightweight student model. The technique involves jointly optimizing the task-specific and knowledge distillation losses, with a weight assigned to each. Although these weights play a crucial role in the performance of the distillation process, current methods assign equal weight to both losses, leading to suboptimal performance. In this paper, we propose Adaptive …
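As context for the abstract, below is a minimal sketch of the standard knowledge-distillation objective it describes: a weighted sum of the task-specific loss (cross-entropy on ground-truth labels) and the distillation loss (KL divergence between temperature-softened teacher and student outputs). The function name `kd_loss` and the parameters `alpha` and `temperature` are illustrative assumptions; the paper's contribution, per the abstract, is choosing the weighting adaptively rather than fixing it, which is not shown here.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, alpha=0.5, temperature=4.0):
    """Weighted sum of the task-specific and distillation losses.

    `alpha` is the fixed weight the abstract refers to; equal weighting of the
    two losses corresponds to alpha = 0.5.
    """
    # Task-specific loss: cross-entropy against the hard labels
    task_loss = F.cross_entropy(student_logits, labels)

    # Distillation loss: KL divergence between temperature-softened
    # student and teacher distributions (scaled by T^2, as is conventional)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    distill_loss = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * temperature ** 2

    # Jointly optimized objective
    return alpha * task_loss + (1.0 - alpha) * distill_loss
```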

