Faculty Distillation with Optimal Transport. (arXiv:2204.11526v2 [cs.LG] UPDATED)
Web: http://arxiv.org/abs/2204.11526
June 17, 2022, 1:11 a.m. | Su Lu, Han-Jia Ye, De-Chuan Zhan
cs.LG updates on arXiv.org
The outpouring of various pre-trained models empowers knowledge
distillation (KD) by providing abundant teacher resources. Meanwhile, exploring
this massive model repository to select a suitable teacher and further
extract its knowledge becomes a daunting challenge. Standard KD fails to
surmount two obstacles when training a student with a plentiful supply of
pre-trained teachers, i.e., the "faculty". First, we need to seek out
the most contributive teacher in the faculty efficiently rather than
enumerating all of them for a student. Second, since the …
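To make the setting concrete, below is a minimal, hypothetical sketch of the general idea the title points at: scoring each candidate teacher in the "faculty" by an entropic (Sinkhorn) optimal-transport distance between its predictive distribution and the student's, and picking the closest one instead of distilling from every teacher. The function names, the toy data, and the use of average class distributions are illustrative assumptions, not the paper's actual algorithm.

import numpy as np

def sinkhorn_distance(p, q, cost, reg=0.1, n_iters=200):
    """Entropic-regularized OT distance between histograms p and q
    under ground cost matrix `cost`, via Sinkhorn iterations."""
    K = np.exp(-cost / reg)          # Gibbs kernel
    u = np.ones_like(p)
    v = np.ones_like(q)
    for _ in range(n_iters):
        v = q / (K.T @ u)
        u = p / (K @ v)
    transport = np.diag(u) @ K @ np.diag(v)
    return float(np.sum(transport * cost))

def rank_teachers(student_probs, teacher_probs_list, cost):
    """Rank candidate teachers by OT distance to the student's distribution
    (smaller = better match), without training against every teacher."""
    scores = [sinkhorn_distance(student_probs, t, cost) for t in teacher_probs_list]
    return np.argsort(scores), scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_classes = 10
    # Toy average predictive distributions for one student and three teachers.
    student = rng.dirichlet(np.ones(n_classes))
    teachers = [rng.dirichlet(np.ones(n_classes)) for _ in range(3)]
    # Toy ground cost between classes, normalized to [0, 1]
    # (e.g., a distance in some label-embedding space).
    cost = np.abs(np.subtract.outer(np.arange(n_classes), np.arange(n_classes))).astype(float)
    cost /= cost.max()
    order, scores = rank_teachers(student, teachers, cost)
    print("teacher ranking (best first):", order, "scores:", np.round(scores, 4))

The ranking step runs in time proportional to the number of teachers times one Sinkhorn solve, which is the point of selecting a teacher by a cheap distributional distance rather than trial-distilling from every model in the repository.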
Latest AI/ML/Big Data Jobs
Machine Learning Researcher - Saalfeld Lab
@ Howard Hughes Medical Institute - Chevy Chase, MD | Ashburn, Virginia
Project Director, Machine Learning in US Health
@ ideas42.org | Remote, US
Data Science Intern
@ NannyML | Remote
Machine Learning Engineer NLP/Speech
@ Play.ht | Remote
Research Scientist, 3D Reconstruction
@ Yembo | Remote, US
Clinical Assistant or Associate Professor of Management Science and Systems
@ University at Buffalo | Buffalo, NY