March 19, 2024, 4:41 a.m. | Eugene Ku

cs.LG updates on arXiv.org

arXiv:2403.10807v1 Announce Type: new
Abstract: Knowledge Distillation (KD) aims to transfer a more capable teacher model's knowledge to a lighter student model in order to improve efficiency, making the student faster and easier to deploy. However, optimizing the student model over the noisy pseudo labels generated by the teacher is tricky, and the number of pseudo labels one can generate is limited by Out of Memory (OOM) errors. In this paper, we propose FlyKD (Knowledge …
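For readers unfamiliar with the setup, a minimal sketch of vanilla logit-based knowledge distillation (not the paper's FlyKD variant, whose description is cut off above) might look like the following; the `temperature` and `alpha` hyperparameters are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of standard knowledge distillation (illustrative only):
# the student is trained on a temperature-softened KL divergence against the
# teacher's logits, plus the usual hard-label cross-entropy.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    # Soft targets from the (frozen) teacher; detach so no gradients flow to it.
    soft_targets = F.softmax(teacher_logits.detach() / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL term is scaled by T^2 to keep gradient magnitudes comparable across temperatures.
    distill = F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature ** 2
    # Standard supervised loss on the ground-truth labels.
    supervised = F.cross_entropy(student_logits, labels)
    return alpha * distill + (1.0 - alpha) * supervised
```

The pseudo labels the abstract refers to play the role of `teacher_logits` here; FlyKD's contribution concerns how such labels are generated and weighted during training, which this generic sketch does not capture.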

