Compression-aware Training of Neural Networks using Frank-Wolfe
Feb. 15, 2024, 5:43 a.m. | Max Zimmer, Christoph Spiegel, Sebastian Pokutta
cs.LG updates on arXiv.org
Abstract: Many existing neural network pruning approaches rely either on retraining or on inducing a strong bias in order to converge to a sparse solution throughout training. A third paradigm, 'compression-aware' training, aims to obtain state-of-the-art dense models that are robust to a wide range of compression ratios using a single dense training run while also avoiding retraining. We propose a framework centered around a versatile family of norm constraints and the Stochastic Frank-Wolfe (SFW) algorithm that …
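The key mechanic behind the proposed framework is the Stochastic Frank-Wolfe update, which keeps iterates inside a norm-constraint set without any projection step: each iteration calls a linear minimization oracle (LMO) over the constraint set and moves toward the returned vertex via a convex combination. The sketch below is a minimal illustration assuming an L1-ball constraint and a user-supplied stochastic gradient oracle; the names (`lmo_l1_ball`, `stochastic_frank_wolfe`, `grad_fn`) and the step-size schedule are illustrative assumptions, not the authors' implementation, and the paper's family of norm constraints is broader than the single L1 ball used here.

```python
import numpy as np

def lmo_l1_ball(grad, radius):
    """Linear minimization oracle over an L1 ball of the given radius:
    returns argmin over {||v||_1 <= radius} of <grad, v>, which places
    all mass on the coordinate with the largest-magnitude gradient entry.
    (Illustrative choice; the paper considers a family of norm constraints.)"""
    v = np.zeros_like(grad)
    i = np.argmax(np.abs(grad))
    v[i] = -radius * np.sign(grad[i])
    return v

def stochastic_frank_wolfe(grad_fn, w0, radius, steps=1000):
    """Minimal Stochastic Frank-Wolfe loop (a sketch, not the paper's code):
    the convex-combination update keeps every iterate feasible, so no
    projection onto the constraint set is ever needed."""
    w = w0.copy()
    for t in range(steps):
        g = grad_fn(w)                    # stochastic gradient estimate
        v = lmo_l1_ball(g, radius)        # vertex minimizing <g, v>
        gamma = 2.0 / (t + 2)             # standard decaying step size
        w = (1 - gamma) * w + gamma * v   # convex update stays in the ball
    return w
```

Because the LMO over an L1 ball returns a 1-sparse vertex, the iterates are biased toward solutions whose weight mass concentrates on few coordinates, which is what makes the resulting dense model robust to pruning at a range of compression ratios.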