Aug. 10, 2023, 4:44 a.m. | Luke McDermott, Daniel Cummings

cs.LG updates on arXiv.org

This work introduces a novel approach to pruning deep learning models by using distilled data. Unlike conventional strategies, which primarily focus on architectural or algorithmic optimization, our method reconsiders the role of data in the pruning process. Distilled datasets capture essential patterns from larger datasets, and we demonstrate how to leverage this capability to enable a computationally efficient pruning process. Our approach can find sparse, trainable subnetworks (a.k.a. Lottery Tickets) up to 5x faster than Iterative Magnitude Pruning at comparable sparsity …
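To make the idea concrete, below is a minimal sketch of lottery-ticket-style iterative magnitude pruning in which the retraining between pruning rounds runs on a small distilled dataset rather than the full one. This is an illustration of the general technique, not the authors' implementation: the model, the random stand-in for distilled data, and all hyperparameters are hypothetical placeholders.

```python
# Sketch: iterative magnitude pruning where inter-round retraining uses only a
# tiny distilled dataset. Assumes PyTorch; all names/values are illustrative.
import torch
import torch.nn as nn

def apply_masks(model, masks):
    # Keep pruned weights pinned at zero.
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name])

def train_on_distilled(model, x_distilled, y_distilled, masks, steps=100, lr=0.1):
    # Briefly train the masked model on the distilled data only; because the
    # distilled set is tiny, each pruning round is cheap.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x_distilled), y_distilled).backward()
        opt.step()
        apply_masks(model, masks)

def magnitude_prune(model, masks, frac=0.2):
    # Zero out the smallest-magnitude fraction of the still-unpruned weights.
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name not in masks:
                continue
            alive = p[masks[name].bool()].abs()
            if alive.numel() == 0:
                continue
            k = max(1, int(frac * alive.numel()))
            threshold = alive.kthvalue(k).values
            masks[name] = (p.abs() > threshold).float() * masks[name]

# Hypothetical setup: a tiny MLP and a 10-example "distilled" set (random here,
# standing in for an actual distilled dataset).
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
masks = {n: torch.ones_like(p) for n, p in model.named_parameters() if p.dim() > 1}
x_distilled, y_distilled = torch.randn(10, 1, 28, 28), torch.arange(10)

init_state = {k: v.clone() for k, v in model.state_dict().items()}  # save init for rewinding
for _ in range(5):                                # iterative pruning rounds
    train_on_distilled(model, x_distilled, y_distilled, masks)
    magnitude_prune(model, masks, frac=0.2)       # drop 20% of remaining weights
    model.load_state_dict(init_state)             # rewind to init (lottery-ticket style)
    apply_masks(model, masks)                     # the surviving mask defines the subnetwork
```

The speedup claimed in the abstract comes from the retraining step: standard Iterative Magnitude Pruning retrains on the full dataset between rounds, whereas here each round only sees the distilled examples.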

