Web: http://arxiv.org/abs/2205.01915

May 5, 2022, 1:10 a.m. | Han-Jia Ye, Su Lu, De-Chuan Zhan

cs.CV updates on arXiv.org

The knowledge of a well-trained deep neural network (a.k.a. the "teacher") is
valuable for learning similar tasks. Knowledge distillation extracts knowledge
from the teacher and integrates it with the target model (a.k.a. the
"student"), which expands the student's knowledge and improves its learning
efficacy. Instead of requiring the teacher to work on the same task as the
student, we borrow knowledge from a teacher trained on a general label
space -- in this "Generalized Knowledge Distillation (GKD)" setting, the classes …
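
To make the "extract knowledge from the teacher and integrate it with the student" step concrete, below is a minimal sketch of the classic soft-target distillation loss (Hinton-style KD), which is the baseline the GKD setting generalizes. It is not the paper's relationship-matching method; the function name `distillation_loss` and the `temperature`/`alpha` values are illustrative, and the sketch assumes teacher and student share the same label space, which GKD explicitly relaxes.

```python
# Minimal sketch of standard knowledge distillation (soft-target matching).
# Assumes teacher and student predict over the same classes; the GKD setting
# in the paper allows the two label spaces to differ and needs extra machinery.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.7):
    """Blend a soft-target term (teacher knowledge) with the usual hard-label term."""
    # Soften both distributions with the temperature; scaling by T^2 keeps
    # the gradient magnitude of the KD term comparable to the CE term.
    soft_targets = F.softmax(teacher_logits / temperature, dim=1)
    log_student = F.log_softmax(student_logits / temperature, dim=1)
    kd_term = F.kl_div(log_student, soft_targets,
                       reduction="batchmean") * temperature ** 2
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term
```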

Tags: arxiv, cv, distillation, knowledge, relationship
