March 13, 2024, 4:43 a.m. | Emanuel Ben-Baruch, Adam Botach, Igor Kviatkovsky, Manoj Aggarwal, Gérard Medioni

cs.LG updates on arXiv.org arxiv.org

arXiv:2403.07854v1 Announce Type: cross
Abstract: With the increasing size of datasets used for training neural networks, data pruning has become an attractive field of research. However, most current data pruning algorithms are limited in their ability to preserve accuracy relative to models trained on the full data, especially in high pruning regimes. In this paper, we explore the application of data pruning while incorporating knowledge distillation (KD) when training on a pruned subset. That is, rather than relying solely on ground-truth …
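The abstract describes training a student on a pruned subset with knowledge distillation, i.e., supplementing ground-truth labels with soft targets from a teacher trained on the full data. Below is a minimal sketch of that setup in PyTorch, using the standard Hinton et al. KD objective rather than the paper's specific method; the names `scores`, `alpha`, `T`, and `keep_fraction` are illustrative assumptions, not the paper's notation.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, alpha=0.5, T=4.0):
    """Standard KD objective (Hinton et al.): a weighted sum of ground-truth
    cross-entropy and the temperature-scaled teacher-student KL divergence."""
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.log_softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
        log_target=True,
    ) * (T * T)  # T^2 keeps gradient magnitudes comparable to the CE term
    return alpha * ce + (1.0 - alpha) * kl

def train_on_pruned_subset(student, teacher, dataset, keep_fraction=0.1,
                           scores=None, epochs=1, lr=1e-3, device="cpu"):
    """Train the student on a pruned subset while distilling from the teacher.

    `scores` is a hypothetical per-example importance tensor (one value per
    example); when it is absent we fall back to a random subset as a baseline."""
    n = len(dataset)
    k = max(1, int(keep_fraction * n))
    if scores is None:
        keep = torch.randperm(n)[:k]                       # random pruning baseline
    else:
        keep = torch.argsort(scores, descending=True)[:k]  # keep top-scoring examples
    loader = torch.utils.data.DataLoader(
        torch.utils.data.Subset(dataset, keep.tolist()),
        batch_size=128, shuffle=True,
    )
    student, teacher = student.to(device), teacher.to(device)
    opt = torch.optim.SGD(student.parameters(), lr=lr, momentum=0.9)
    teacher.eval()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            with torch.no_grad():
                t_logits = teacher(x)  # soft targets from the full-data teacher
            loss = kd_loss(student(x), t_logits, y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student
```

The argsort-by-score selection stands in for whatever pruning criterion the paper actually uses; only the overall structure (prune first, then train the student with a blended hard-label/soft-label loss) follows the abstract.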
