May 25, 2022, 1:10 a.m. | Max Zimmer, Christoph Spiegel, Sebastian Pokutta

cs.LG updates on arXiv.org

Many existing Neural Network pruning approaches either rely on retraining to
compensate for pruning-caused performance degradation or induce strong biases
throughout training so that the model converges to a specific sparse solution.
A third paradigm obtains a wide range of compression ratios from a single dense
training run while also avoiding retraining. Recent work of Pokutta et al.
(2020) and Miao et al. (2022) suggests that the Stochastic Frank-Wolfe (SFW)
algorithm is particularly suited for training state-of-the-art models that are
robust to …
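For readers unfamiliar with the method, a Stochastic Frank-Wolfe update is projection-free: instead of a gradient step followed by a projection, it calls a linear minimization oracle (LMO) over the feasible region and moves toward the returned vertex. The sketch below is a minimal illustration assuming an L1-ball constraint and a toy least-squares objective; the function names (`lmo_l1_ball`, `sfw_step`) and all parameters are illustrative and not taken from the paper's implementation.

```python
# Minimal Stochastic Frank-Wolfe (SFW) sketch over an L1-ball constraint.
# Illustrative only: the paper trains neural networks; a toy least-squares
# problem stands in here so the update rule itself is easy to follow.
import numpy as np

def lmo_l1_ball(grad, radius):
    """Linear minimization oracle: argmin_{||v||_1 <= radius} <grad, v>.
    The minimizer is a signed vertex of the L1 ball along the coordinate
    with the largest-magnitude gradient, so each oracle output is 1-sparse."""
    v = np.zeros_like(grad)
    i = np.argmax(np.abs(grad))
    v[i] = -radius * np.sign(grad[i])
    return v

def sfw_step(x, stochastic_grad, radius, lr):
    """One SFW update: move x toward the LMO vertex with step size lr in (0, 1]."""
    v = lmo_l1_ball(stochastic_grad, radius)
    return x + lr * (v - x)

# Toy usage: minimize ||A x - b||^2 with mini-batch (stochastic) gradients.
rng = np.random.default_rng(0)
A, b = rng.normal(size=(200, 50)), rng.normal(size=200)
x = np.zeros(50)
for t in range(1, 501):
    idx = rng.choice(200, size=32, replace=False)        # mini-batch indices
    g = 2 * A[idx].T @ (A[idx] @ x - b) / len(idx)       # stochastic gradient
    x = sfw_step(x, g, radius=5.0, lr=2.0 / (t + 2))     # classic FW step size
print("nonzeros:", np.count_nonzero(x), "loss:", np.mean((A @ x - b) ** 2))
```

Because every LMO output is a 1-sparse vertex and each iterate is a convex combination of previous vertices, SFW keeps most coordinates at or near zero, which is the kind of constraint-induced structure the abstract refers to when motivating SFW for compression-friendly training.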

Tags: arxiv, compression, networks, neural networks, training
