Training Your Sparse Neural Network Better with Any Mask. (arXiv:2206.12755v2 [cs.CV] UPDATED)
June 29, 2022, 1:13 a.m. | Ajay Jaiswal, Haoyu Ma, Tianlong Chen, Ying Ding, Zhangyang Wang
cs.CV updates on arXiv.org arxiv.org
Pruning large neural networks to create high-quality, independently trainable
sparse masks, which can maintain similar performance to their dense
counterparts, is very desirable due to the reduced space and time complexity.
As research effort is focused on increasingly sophisticated pruning methods
that lead to sparse subnetworks trainable from scratch, we argue for an
orthogonal, under-explored theme: improving training techniques for pruned
sub-networks, i.e. sparse training. Apart from the popular belief that only the
quality of sparse masks matters for …
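The core mechanic the abstract refers to, training a pruned sub-network under a fixed binary mask, can be sketched as follows. This is a minimal illustrative NumPy example, not the authors' method: the weight shapes, sparsity level, and the dummy gradient are all assumptions, and the key invariant shown is that pruned connections never receive updates.

```python
import numpy as np

rng = np.random.default_rng(0)

# A dense weight matrix and a binary sparsity mask (roughly 90% of entries pruned).
W = rng.normal(size=(8, 8))
mask = (rng.random((8, 8)) > 0.9).astype(W.dtype)
W *= mask  # pruned weights start (and stay) at zero

# One sparse-training step: compute a gradient (here just random, as a stand-in
# for backprop), then re-apply the mask so pruned connections are never updated.
grad = rng.normal(size=W.shape)
lr = 0.1
W -= lr * (grad * mask)

# Invariant of sparse training with a fixed mask: pruned entries remain zero.
print(np.all(W[mask == 0] == 0))  # True
```

In practice (e.g. with PyTorch), the same effect is achieved by multiplying weights or gradients by the mask at each step; the paper's argument is that, given any such mask, the training procedure itself still has substantial room for improvement.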