July 5, 2022, 1:13 a.m. | Nathan Hubens, Matei Mancas, Bernard Gosselin, Marius Preda, Titus Zaharia

cs.CV updates on arXiv.org

Introducing sparsity in a neural network is an effective way to reduce
its complexity while keeping its performance almost intact. Sparsity is most
often introduced through a three-stage pipeline: 1) train the model to
convergence, 2) prune the model according to some criterion, 3) fine-tune the
pruned model to recover performance. The last two steps are often performed
iteratively, which yields reasonable results but also makes the process
time-consuming and complex. In our work, we propose to …

Tags: arxiv, budget, cv, pruning, training
