Oct. 6, 2022, 1:11 a.m. | Adam Scherlis, Kshitij Sachan, Adam S. Jermyn, Joe Benton, Buck Shlegeris

cs.LG updates on arXiv.org

Individual neurons in neural networks often represent a mixture of unrelated
features. This phenomenon, called polysemanticity, can make interpreting neural
networks more difficult, so we aim to understand its causes. We propose doing so
through the lens of feature "capacity": the fractional dimension each feature
consumes in the embedding space. We show that in a toy model the optimal
capacity allocation tends to monosemantically represent the most important
features, polysemantically represent less important features (in proportion
to …
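The truncated abstract only gives the verbal definition of capacity, so as a minimal sketch, here is one common formulation from the superposition literature: the capacity of feature i with embedding vector w_i is taken to be C_i = ||w_i||^4 / Σ_j (w_i · w_j)^2. This is an assumption about the exact formula, not a quote from the paper; under it, a feature with a dedicated orthogonal direction gets capacity 1, a feature sharing dimensions with others gets a fraction, and the capacities sum to at most the embedding dimension d.

```python
import numpy as np

def feature_capacity(W: np.ndarray) -> np.ndarray:
    """Per-feature capacity for an embedding matrix W of shape
    (n_features, d), using the assumed formulation
    C_i = ||w_i||^4 / sum_j (w_i . w_j)^2.
    """
    G = W @ W.T                       # Gram matrix of feature embeddings
    num = np.diag(G) ** 2             # ||w_i||^4
    den = (G ** 2).sum(axis=1)        # sum_j (w_i . w_j)^2
    return num / np.maximum(den, 1e-12)

# Example: 3 features competing for a 2-dimensional embedding space.
W = np.array([[1.0, 0.0],    # feature 0: axis-aligned direction
              [0.0, 1.0],    # feature 1: axis-aligned direction
              [0.7, 0.7]])   # feature 2: overlaps both axes
print(feature_capacity(W))   # each C_i is in [0, 1]; they sum to <= d = 2
```

Because feature 2 overlaps the other two directions, all three capacities fall below 1, illustrating the trade-off the abstract describes: with more important features than dimensions, the model must either share capacity (polysemanticity) or ignore some features entirely.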

arxiv capacity networks neural networks
