Regularization-based Pruning of Irrelevant Weights in Deep Neural Architectures. (arXiv:2204.04977v2 [cs.CL] UPDATED)
Oct. 31, 2022, 1:15 a.m. | Giovanni Bonetta, Matteo Ribero, Rossella Cancelliere
cs.CL updates on arXiv.org
Deep neural networks exploiting millions of parameters are nowadays the norm
in deep learning applications. This is a potential issue because of the large
amount of computational resources needed for training and the possible loss of
generalization performance in overparametrized networks. In this paper we
propose a method for learning sparse neural topologies via a regularization
technique that identifies irrelevant weights and selectively shrinks their
norm, while performing a classic update for relevant ones. This technique,
which is …
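The core idea (selectively shrinking the norm of irrelevant weights while applying a classic gradient update to relevant ones) can be sketched as a single update step. This is a minimal illustration, not the paper's exact method: the magnitude-threshold relevance criterion and the names `selective_shrink_update`, `decay`, and `threshold` are assumptions made for the example.

```python
import numpy as np

def selective_shrink_update(w, grad, lr=0.1, decay=0.5, threshold=0.05):
    """One sketched update step.

    Weights whose magnitude falls below `threshold` are treated as
    irrelevant and receive an extra norm-shrinking decay term; relevant
    weights get only the classic gradient update. (The relevance
    criterion here is a hypothetical stand-in for the paper's.)
    """
    irrelevant = np.abs(w) < threshold
    w_new = w - lr * grad                 # classic update for all weights
    # additional shrinkage applied only to the irrelevant subset
    w_new[irrelevant] -= lr * decay * w[irrelevant]
    return w_new

# Usage: a "relevant" weight (1.0) is untouched by the decay term,
# while a small "irrelevant" weight (0.01) is pulled toward zero.
w = np.array([1.0, 0.01])
grad = np.zeros_like(w)
w_next = selective_shrink_update(w, grad)
```

Driving irrelevant weights toward zero in this way yields a sparse topology that can then be pruned, which is the goal the abstract describes.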