Nov. 1, 2022, 1:11 a.m. | Zheyang Xiong, Fangshuo Liao, Anastasios Kyrillidis

cs.LG updates on arXiv.org

The strong Lottery Ticket Hypothesis (LTH) claims the existence of a
subnetwork in a sufficiently large, randomly initialized neural network that
approximates some target neural network without any training. We extend the
theoretical guarantees of the strong LTH literature to a scenario closer to
the original LTH by generalizing the weight change in the pre-training step
to some perturbation around initialization. In particular, we focus on the
following open question: by allowing an $\varepsilon$-scale perturbation on
the …
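The existence proofs behind the strong LTH typically reduce to a subset-sum argument: with enough random weights, some subset sums to within any tolerance of a target value, so pruning (masking) alone can realize it. The toy sketch below illustrates that core idea by brute-force searching for a binary mask over random weights; the variable names and the tiny scale are illustrative choices, not the paper's construction.

```python
import itertools
import random

random.seed(0)

# A small pool of random "initialized" weights.
n = 16
weights = [random.uniform(-1.0, 1.0) for _ in range(n)]

# The single target value we want a pruned subnetwork to approximate.
target = 0.7

# Exhaustively search all 2^n binary masks for the subset
# whose sum is closest to the target.
best_mask, best_err = None, float("inf")
for mask in itertools.product([0, 1], repeat=n):
    s = sum(w for w, m in zip(weights, mask) if m)
    err = abs(s - target)
    if err < best_err:
        best_mask, best_err = mask, err

print(f"best approximation error: {best_err:.6f}")
```

With only 16 random weights the best achievable error is already very small, which mirrors why over-parameterized random networks contain accurate subnetworks: the number of candidate subsets grows exponentially with width.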

