De-confounded Data-free Knowledge Distillation for Handling Distribution Shifts
March 29, 2024, 4:45 a.m. | Yuzheng Wang, Dingkang Yang, Zhaoyu Chen, Yang Liu, Siao Liu, Wenqiang Zhang, Lihua Zhang, Lizhe Qi
cs.CV updates on arXiv.org (arxiv.org)
Abstract: Data-Free Knowledge Distillation (DFKD) is a promising task that trains high-performance small models for practical deployment without relying on the original training data. Existing methods commonly avoid relying on private data by utilizing synthetic or sampled data as a substitute. However, a long-overlooked issue is the severe distribution shift between this substitute data and the original data, which manifests as large differences in image quality and class proportions. These harmful shifts are essentially the confounder that …
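For context, below is a minimal sketch of a generic data-free knowledge distillation loop, not the paper's de-confounded method: a generator synthesizes substitute images, and the student is trained to match the frozen teacher's softened predictions on them. All module names, the adversarial generator objective, and the hyperparameters are illustrative assumptions.

```python
# Generic DFKD sketch (illustrative only; NOT the de-confounded method from the paper).
# Assumes a pretrained `teacher`, an untrained `student`, and a noise-conditioned
# `generator` mapping latent vectors to images -- all hypothetical stand-ins.
import torch
import torch.nn.functional as F

def dfkd_step(teacher, student, generator, opt_s, opt_g,
              batch_size=64, z_dim=128, temperature=4.0, device="cuda"):
    teacher.eval()

    # 1) Synthesize substitute data (stand-in for the inaccessible original training set).
    z = torch.randn(batch_size, z_dim, device=device)
    fake_images = generator(z)

    with torch.no_grad():
        t_logits = teacher(fake_images)

    # 2) Student update: match the teacher's softened predictions on the synthetic batch.
    s_logits = student(fake_images.detach())
    kd_loss = F.kl_div(
        F.log_softmax(s_logits / temperature, dim=1),
        F.softmax(t_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    opt_s.zero_grad()
    kd_loss.backward()
    opt_s.step()

    # 3) Generator update (adversarial flavour): seek samples on which student and
    #    teacher still disagree, so distillation keeps receiving informative data.
    s_logits = student(fake_images)
    gen_loss = -F.kl_div(
        F.log_softmax(s_logits / temperature, dim=1),
        F.softmax(t_logits / temperature, dim=1),
        reduction="batchmean",
    )
    opt_g.zero_grad()
    gen_loss.backward()
    opt_g.step()
    return kd_loss.item(), gen_loss.item()
```

Because the generator, not real data, decides what the student sees, any bias in the synthesized images or their class proportions becomes exactly the kind of distribution shift the abstract describes.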