Web: http://arxiv.org/abs/2206.08175

June 17, 2022, 1:11 a.m. | Surya Kant Sahu, Sai Mitheran, Somya Suhans Mahapatra

cs.LG updates on arXiv.org

The Lottery Ticket Hypothesis (LTH) states that, for a reasonably sized neural
network, there exists a sub-network within it that, when trained from the same
initialization, performs no worse than its dense counterpart. This work
investigates the relation between model size and the ease of finding these
sparse sub-networks. Our experiments show that, surprisingly, under a finite
budget, smaller models benefit more from Ticket Search (TS).
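The abstract does not spell out how Ticket Search is performed; the standard procedure in the LTH literature is iterative magnitude pruning with rewinding: train the network, prune the smallest-magnitude weights, reset the survivors to their initial values, and repeat. A minimal NumPy sketch of that loop, with `train_fn` as a hypothetical stand-in for an actual training run, might look like this:

```python
import numpy as np

def magnitude_mask(weights, keep_fraction):
    """Binary mask keeping the top `keep_fraction` of weights by magnitude."""
    k = max(1, int(round(keep_fraction * weights.size)))
    # threshold at the k-th largest magnitude
    thresh = np.sort(np.abs(weights).ravel())[-k]
    return (np.abs(weights) >= thresh).astype(weights.dtype)

def ticket_search(init_weights, train_fn, rounds=3, prune_rate=0.2):
    """Iterative magnitude pruning with rewinding (sketch).

    `train_fn` is a placeholder for a real training loop: it takes the
    (masked) initial weights and returns the trained weights.
    """
    mask = np.ones_like(init_weights)
    for _ in range(rounds):
        trained = train_fn(init_weights * mask) * mask   # train the sparse net
        keep = (1.0 - prune_rate) * mask.mean()          # prune a fraction of survivors
        mask = magnitude_mask(trained, keep)             # rewind happens next round
    return mask  # the "winning ticket" sub-network
```

The finite-budget trade-off the paper studies corresponds to the number of `rounds` (each of which costs a full training run), which is what makes TS expensive for large models.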

