Neural Activation Patterns (NAPs): Visual Explainability of Learned Concepts. (arXiv:2206.10611v1 [cs.LG])
June 23, 2022, 1:10 a.m. | Alex Bäuerle, Daniel Jönsson, Timo Ropinski
cs.LG updates on arXiv.org arxiv.org
A key to deciphering the inner workings of neural networks is understanding what a model has learned. Promising methods for discovering learned features are based on analyzing activation values; current techniques focus on high activation values to reveal interesting features at the neuron level. However, restricting the analysis to high activation values limits layer-level concept discovery. We present a method that instead takes the entire activation distribution into account. By extracting similar activation profiles within the high-dimensional activation space of a …
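The core idea of grouping inputs by their full layer-level activation profiles, rather than keeping only the top-activating inputs per neuron, can be illustrated with a small sketch. This is not the authors' implementation; the synthetic activations and the use of k-means as the grouping step are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for real layer activations: 200 inputs x 64 neurons,
# drawn from three synthetic "concept" clusters (hypothetical data,
# replacing activations recorded from an actual model).
centers = rng.normal(size=(3, 64))
labels = rng.integers(0, 3, size=200)
activations = centers[labels] + 0.1 * rng.normal(size=(200, 64))

# Cluster the full activation profiles; each cluster is a candidate
# layer-level concept. K-means is one possible reading of "similar
# activation profiles", not necessarily the paper's method.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(activations)
profiles = kmeans.labels_

# Inputs sharing a profile can then be inspected together, e.g. by
# visualizing representative inputs per cluster.
for c in range(3):
    members = np.flatnonzero(profiles == c)
    print(f"concept {c}: {members.size} inputs")
```

Note that a per-neuron "top activations" analysis would only surface inputs maximizing individual dimensions; clustering whole profiles instead groups inputs by how the entire layer responds.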