Jan. 26, 2022, 2:11 a.m. | Yonglong Tian, Dilip Krishnan, Phillip Isola

cs.LG updates on arXiv.org

Often we wish to transfer representational knowledge from one neural network
to another. Examples include distilling a large network into a smaller one,
transferring knowledge from one sensory modality to a second, or ensembling a
collection of models into a single estimator. Knowledge distillation, the
standard approach to these problems, minimizes the KL divergence between the
probabilistic outputs of a teacher and student network. We demonstrate that
this objective ignores important structural knowledge of the teacher network.
This motivates an …
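
The standard objective the abstract refers to can be written down compactly: the KL divergence between the teacher's and the student's predicted class distributions. Below is a minimal PyTorch sketch of that loss; the temperature parameter `T` and the `kd_loss` helper name are illustrative assumptions (following the common temperature-softened formulation), not details taken from this abstract.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits: torch.Tensor,
            teacher_logits: torch.Tensor,
            T: float = 4.0) -> torch.Tensor:
    """KL(teacher || student) on temperature-softened class distributions.

    This is the standard knowledge-distillation objective the abstract
    describes; T is an assumed temperature hyperparameter.
    """
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    # 'batchmean' matches the mathematical definition of KL divergence;
    # the T**2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T ** 2)

# Example usage with random logits for a 10-class problem.
student_logits = torch.randn(8, 10)
teacher_logits = torch.randn(8, 10)
loss = kd_loss(student_logits, teacher_logits)
```

The abstract's point is that a loss like this looks only at each example's output distribution in isolation, ignoring structural (e.g., cross-example) knowledge in the teacher's representation.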

arxiv distillation
