April 8, 2024, 4:42 a.m. | Angus Nicolson, Lisa Schut, J. Alison Noble, Yarin Gal

cs.LG updates on arXiv.org

arXiv:2404.03713v1 Announce Type: new
Abstract: Recent interpretability methods propose using concept-based explanations to translate the internal representations of deep learning models into a language that humans are familiar with: concepts. This requires understanding which concepts are present in the representation space of a neural network. One popular method for finding concepts is Concept Activation Vectors (CAVs), which are learnt using a probe dataset of concept exemplars. In this work, we investigate three properties of CAVs. CAVs may be: (1) inconsistent …
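For readers unfamiliar with how a CAV is obtained: the standard recipe (Kim et al., 2018, TCAV) extracts activations at a chosen layer for a probe dataset of concept exemplars and for random images, then fits a linear classifier between the two sets; the CAV is the normal vector of the resulting decision boundary. Below is a minimal sketch of that recipe, not the paper's own implementation; the function name, array shapes, and classifier choice are illustrative.

```python
# Minimal sketch of learning a Concept Activation Vector (CAV).
# Assumes activations have already been extracted at one layer of the
# network; names and shapes here are illustrative, not the paper's code.
import numpy as np
from sklearn.linear_model import LogisticRegression

def learn_cav(concept_acts: np.ndarray, random_acts: np.ndarray) -> np.ndarray:
    """Fit a linear probe separating concept-exemplar activations from
    random activations and return the unit normal of its decision
    boundary, i.e. the CAV.

    concept_acts: (n_concept, d) activations of concept exemplars
    random_acts:  (n_random, d) activations of random probe images
    """
    X = np.concatenate([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)),
                        np.zeros(len(random_acts))])
    probe = LogisticRegression(max_iter=1000).fit(X, y)
    cav = probe.coef_.ravel()       # normal vector of the boundary
    return cav / np.linalg.norm(cav)
```

Because the CAV depends on the sampled probe dataset, re-fitting on resampled exemplar and random sets gives a quick empirical check of the inconsistency property the abstract flags.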
