Web: http://arxiv.org/abs/2205.02321

May 6, 2022, 1:11 a.m. | Rebekka Burkholz

cs.LG updates on arXiv.org

The strong lottery ticket hypothesis has highlighted the potential for
training deep neural networks by pruning alone, which has inspired practical
and theoretical insights into how neural networks represent functions. For
networks with ReLU activation functions, it has been proven that a target
network of depth $L$ can be approximated by a subnetwork of a randomly
initialized neural network that has double the target's depth ($2L$) and is
wider by a logarithmic factor. We show that a depth $L+1$ …
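The logarithmic width overhead in results of this kind typically comes from a subset-sum argument: a single target weight can be approximated, with high probability, by summing a suitable subset of $O(\log(1/\epsilon))$ random weights. The sketch below is an illustrative toy version of that argument, not the paper's construction; the brute-force search, sample count, and target value are all illustrative choices.

```python
import itertools
import random

def best_subset_sum(weights, target):
    """Brute-force the subset of `weights` whose sum is closest to `target`.

    This mimics pruning: 'keeping' a weight means including it in the subset,
    'pruning' means leaving it out. Returns the closest achievable sum.
    """
    best = 0.0  # the empty subset (everything pruned) sums to 0
    for r in range(1, len(weights) + 1):
        for combo in itertools.combinations(weights, r):
            s = sum(combo)
            if abs(s - target) < abs(best - target):
                best = s
    return best

random.seed(0)
n = 16  # on the order of log(1/eps) random weights suffice w.h.p.
weights = [random.uniform(-1.0, 1.0) for _ in range(n)]
target = 0.5  # an arbitrary target weight to approximate

approx = best_subset_sum(weights, target)
print(f"target={target}, approx={approx:.6f}, error={abs(approx - target):.6f}")
```

With 16 random weights there are $2^{16}$ candidate subset sums, so the nearest one to the target is almost always extremely close; this density is what lets a modestly wider random network hide an accurate subnetwork.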

