Web: http://arxiv.org/abs/2206.07918

June 17, 2022, 1:10 a.m. | Zhimin Li, Shusen Liu, Xin Yu, Bhavya Kailkhura, Jie Cao, James Daniel Diffenderfer, Peer-Timo Bremer, Valerio Pascucci

cs.LG updates on arXiv.org arxiv.org

Deep learning approaches have provided state-of-the-art performance in many
applications by relying on extremely large and heavily overparameterized neural
networks. However, such networks have been shown to be very brittle, to
generalize poorly to new use cases, and are often difficult if not impossible to
deploy on resource-limited platforms. Model pruning, i.e., reducing the size
of the network, is a widely adopted strategy that can lead to more robust and
generalizable networks -- usually orders of magnitude smaller with …
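The abstract does not specify the pruning method used; as a minimal illustration of the general idea, here is a sketch of global magnitude pruning with NumPy, which zeroes out a target fraction of the smallest-magnitude weights (the function name and sparsity parameter are illustrative, not from the paper):

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out roughly the fraction `sparsity` of smallest-magnitude weights.

    Note: ties at the threshold value may cause slightly more weights
    to be pruned than requested.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to prune
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Example: prune 50% of a small weight matrix
w = np.array([[0.5, -0.01, 0.3],
              [-0.02, 0.8, 0.05]])
pruned = magnitude_prune(w, 0.5)  # keeps 0.5, 0.3, 0.8; zeroes the rest
```

In practice, such masks are applied layer-wise or globally during or after training, often followed by fine-tuning to recover accuracy.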

