Jan. 3, 2022, 2:10 a.m. | Sílvia Casacuberta, Esra Suel, Seth Flaxman

cs.LG updates on arXiv.org

In this paper we introduce a new problem within the growing literature of
interpretability for convolutional neural networks (CNNs). While previous work
has focused on the question of how to visually interpret CNNs, we ask what it
is that we want to interpret in the first place, that is, which layers and
neurons are worth our attention? Due to the vast size of modern deep learning
architectures, automated, quantitative methods are needed to rank the relative
importance of neurons so as to provide …

arxiv cv interpretability neurons statistical
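
The excerpt cuts off before the paper's method is described, so the sketch below is only a generic illustration of the kind of automated, quantitative neuron ranking the abstract calls for, not the authors' statistical approach. It assumes PyTorch and torchvision, scores each convolutional channel ("neuron") of a ResNet-18 by its mean absolute activation, and uses random input batches as a stand-in for a real dataset; all of these choices are hypothetical.

```python
# A minimal sketch (NOT the paper's method): rank conv-layer channels
# by mean absolute activation over a sample of inputs, a crude
# stand-in for an importance score.
import torch
import torchvision

# Random weights for illustration; real use would load pretrained weights.
model = torchvision.models.resnet18(weights=None).eval()

stats = {}   # layer name -> running sum of per-channel |activation|
counts = {}  # layer name -> number of batches seen

def make_hook(name):
    def hook(module, inputs, output):
        # output has shape (batch, channels, H, W);
        # average |activation| per channel across batch and space.
        act = output.detach().abs().mean(dim=(0, 2, 3))
        stats[name] = stats.get(name, 0) + act
        counts[name] = counts.get(name, 0) + 1
    return hook

# Attach a forward hook to every 2D convolution in the network.
for name, module in model.named_modules():
    if isinstance(module, torch.nn.Conv2d):
        module.register_forward_hook(make_hook(name))

with torch.no_grad():
    for _ in range(4):  # stand-in for iterating over a real DataLoader
        model(torch.randn(8, 3, 224, 224))

# Rank every (layer, channel) pair by its average activation magnitude.
ranking = sorted(
    ((name, c, (s / counts[name])[c].item())
     for name, s in stats.items() for c in range(len(s))),
    key=lambda t: t[2], reverse=True,
)
print(ranking[:5])  # the five "most important" neurons under this proxy
```

Swapping the random tensors for a real DataLoader over the evaluation set would make the scores meaningful; more principled criteria (e.g., the statistical ranking the paper proposes) would replace the mean-activation proxy.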
