Distilled Pruning: Using Synthetic Data to Win the Lottery. (arXiv:2307.03364v3 [cs.LG] UPDATED)
cs.LG updates on arXiv.org
This work introduces a novel approach to pruning deep learning models by
using distilled data. Unlike conventional strategies, which focus primarily on
architectural or algorithmic optimization, our method reconsiders the role of
data in these scenarios. Distilled datasets capture essential patterns from
larger datasets, and we demonstrate how to leverage this capability to enable a
computationally efficient pruning process. Our approach can find sparse,
trainable subnetworks (a.k.a. Lottery Tickets) up to 5x faster than Iterative
Magnitude Pruning at comparable sparsity …
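The Iterative Magnitude Pruning baseline the abstract compares against can be sketched as follows. This is a minimal NumPy illustration of one pruning step, not the paper's implementation: the function names and the toy weight matrix are assumptions for demonstration, and the retrain/rewind phases of full IMP are omitted.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries of a weight array.

    Returns the pruned weights and the binary mask that survives,
    i.e. the candidate 'lottery ticket' structure at this sparsity.
    """
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)  # number of weights to remove
    if k == 0:
        return weights.copy(), np.ones_like(weights)
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = (np.abs(weights) > threshold).astype(weights.dtype)
    return weights * mask, mask

# Toy example: prune half the weights of a small layer
w = np.array([[0.10, -0.50],
              [0.90,  0.05]])
pruned, mask = magnitude_prune(w, sparsity=0.5)
```

In full IMP this step alternates with rewinding the surviving weights to their initialization and retraining; the paper's claimed speedup comes from running those retraining rounds on a small distilled dataset rather than the full one.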