Jan. 2, 2022, 10:13 p.m. | Mateo Suffern

Towards Data Science - Medium towardsdatascience.com

What if, hidden inside every modern deep neural network, there were a “lottery ticket” — a much smaller sub-network that, when trained on its own, would achieve the same or even better performance than the entire trained network?

(Image by author)

In 2019, a paper by Frankle and Carbin [1] appeared with a very intriguing conjecture: based on experimental observations of large neural networks, it seemed that one could take a small portion of the same network and train it to achieve …
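The procedure behind this conjecture is roughly: train the full network, prune the smallest-magnitude weights, rewind the surviving weights to their initial values, and retrain only that sub-network. Below is a minimal, hedged sketch of the magnitude-pruning step in NumPy; the names (`magnitude_prune_mask`, the toy weight matrices) are illustrative and the "training" step is a stand-in, not the paper's actual setup.

```python
import numpy as np

def magnitude_prune_mask(weights, prune_fraction):
    """Return a binary mask that zeroes out the smallest-magnitude weights."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * prune_fraction)  # number of weights to prune
    if k == 0:
        return np.ones_like(weights)
    # k-th smallest absolute value acts as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return (np.abs(weights) > threshold).astype(weights.dtype)

# Conceptual lottery-ticket loop:
# 1. save the initial weights w0
# 2. train to obtain trained weights
# 3. build a mask from the trained weights' magnitudes
# 4. rewind survivors to w0 and retrain that sparse sub-network
rng = np.random.default_rng(0)
w0 = rng.normal(size=(4, 4))                          # hypothetical initial weights
w_trained = w0 * rng.uniform(0.5, 2.0, size=(4, 4))   # stand-in for training
mask = magnitude_prune_mask(w_trained, 0.8)           # prune 80% of weights
ticket = w0 * mask  # the "winning ticket": survivors rewound to initialization
print(int(mask.sum()))  # → 4  (16 weights, 12 pruned)
```

In the paper this prune-rewind-retrain cycle is repeated iteratively, pruning a fraction of the remaining weights each round rather than all at once.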

Tags: data science, deep neural networks, machine learning, networks, neural networks
