CES-KD: Curriculum-based Expert Selection for Guided Knowledge Distillation. (arXiv:2209.07606v1 [cs.CV])
Sept. 19, 2022, 1:11 a.m. | Ibtihel Amara, Maryam Ziaeefard, Brett H. Meyer, Warren Gross, James J. Clark
cs.LG updates on arXiv.org
Knowledge distillation (KD) is an effective tool for compressing deep classification models for edge devices. However, the performance of KD is limited by the large capacity gap between the teacher and student networks. Recent methods have resorted to a multiple teacher assistant (TA) setting for KD, which sequentially decreases the size of the teacher model to gradually bridge the size gap between teacher and student. This paper proposes a new technique called Curriculum Expert Selection for Knowledge Distillation (CES-KD) to efficiently …
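For context, below is a minimal sketch of the standard knowledge-distillation loss that teacher-assistant approaches such as CES-KD build on. This is the classic soft-target formulation (Hinton et al.), not the paper's CES-KD method; the temperature T and mixing weight alpha are illustrative defaults, not values from the paper.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.9):
    """Standard KD loss: KL divergence to the teacher's softened
    distribution, blended with hard-label cross-entropy."""
    # Soften both output distributions with temperature T.
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_soft_student = F.log_softmax(student_logits / T, dim=-1)
    # KL term scaled by T^2 so gradient magnitudes stay comparable
    # across temperatures (as in the original KD formulation).
    distill = F.kl_div(log_soft_student, soft_teacher,
                       reduction="batchmean") * T * T
    # Supervised loss on the ground-truth labels.
    ce = F.cross_entropy(student_logits, targets)
    return alpha * distill + (1.0 - alpha) * ce
```

In a teacher-assistant setting, this same loss is applied repeatedly along a chain of progressively smaller models (teacher → TA(s) → student), so that no single distillation step spans the full capacity gap.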
More from arxiv.org / cs.LG updates on arXiv.org
Generalized Schrödinger Bridge Matching
1 day, 5 hours ago | arxiv.org
Tight bounds on Pauli channel learning without entanglement
1 day, 5 hours ago | arxiv.org
Jobs in AI, ML, Big Data
Senior Machine Learning Engineer (MLOps)
@ Promaton | Remote, Europe
Data Analyst - Associate
@ JPMorgan Chase & Co. | Mumbai, Maharashtra, India
Staff Data Engineer (Data Platform)
@ Coupang | Seoul, South Korea
AI/ML Engineering Research Internship
@ Keysight Technologies | Santa Rosa, CA, United States
Sr. Director, Head of Data Management and Reporting Execution
@ Biogen | Cambridge, MA, United States
Manager, Marketing - Audience Intelligence (Senior Data Analyst)
@ Delivery Hero | Singapore, Singapore