It's All in the Head: Representation Knowledge Distillation through Classifier Sharing. (arXiv:2201.06945v2 [cs.CV] UPDATED)
April 6, 2022, 1:11 a.m. | Emanuel Ben-Baruch, Matan Karklinsky, Yossi Biton, Avi Ben-Cohen, Hussam Lawen, Nadav Zamir
cs.CV updates on arXiv.org
Representation knowledge distillation aims at transferring rich information
from one model to another. Common approaches for representation distillation
mainly focus on the direct minimization of distance metrics between the models'
embedding vectors. Such direct methods may be limited in transferring
high-order dependencies embedded in the representation vectors, or in handling
the capacity gap between the teacher and student models. Moreover, in standard
knowledge distillation, the teacher is trained without awareness of the
student's characteristics and capacity. In this paper, we …
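The abstract contrasts the paper's classifier-sharing approach with the common baseline it improves on: directly minimizing a distance metric between teacher and student embedding vectors. As a rough illustration of that baseline only (not of the paper's method), the following sketch computes a mean squared L2 distance between L2-normalized embeddings; the function name and the use of NumPy arrays are illustrative assumptions, not from the paper.

```python
import numpy as np

def embedding_distillation_loss(teacher_emb, student_emb):
    """Direct representation-distillation baseline (illustrative):
    mean squared L2 distance between L2-normalized teacher and
    student embedding vectors, averaged over the batch."""
    # Normalize each embedding to unit length so the loss compares
    # directions rather than raw magnitudes.
    t = teacher_emb / np.linalg.norm(teacher_emb, axis=1, keepdims=True)
    s = student_emb / np.linalg.norm(student_emb, axis=1, keepdims=True)
    return float(np.mean(np.sum((t - s) ** 2, axis=1)))

rng = np.random.default_rng(0)
teacher = rng.normal(size=(8, 128))                   # batch of teacher embeddings
student = teacher + 0.1 * rng.normal(size=(8, 128))   # a student close to the teacher
loss = embedding_distillation_loss(teacher, student)
```

As the abstract notes, such pointwise distance losses may miss higher-order dependencies among embedding dimensions and struggle with a large teacher-student capacity gap, which is what motivates the classifier-sharing idea.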