April 10, 2024, 4:43 a.m. | David Steinmann, Wolfgang Stammer, Felix Friedrich, Kristian Kersting

cs.LG updates on arXiv.org arxiv.org

arXiv:2308.13453v2 Announce Type: replace
Abstract: While traditional deep learning models often lack interpretability, concept bottleneck models (CBMs) provide inherent explanations via their concept representations. Specifically, they allow users to perform interventional interactions on these concepts by updating the concept values and thus correcting the predictive output of the model. Traditionally, however, these interventions are applied to the model only once and discarded afterward. To rectify this, we present concept bottleneck memory models (CB2M), an extension to CBMs. Specifically, a CB2M …
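The intervention mechanism the abstract describes can be illustrated with a minimal sketch: a two-stage concept bottleneck (input → concepts → label) in which selected predicted concept values are overwritten with user-corrected ones before the label head runs. The class name, layer sizes, and the dictionary-based intervention interface below are illustrative assumptions, not the CB2M implementation from the paper.

```python
import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    """Minimal CBM sketch: x -> concepts -> label (dimensions are illustrative)."""

    def __init__(self, n_features=64, n_concepts=8, n_classes=3):
        super().__init__()
        self.concept_net = nn.Linear(n_features, n_concepts)  # concept predictor
        self.label_net = nn.Linear(n_concepts, n_classes)     # label predictor

    def forward(self, x, interventions=None):
        # Predicted concept activations in [0, 1], one interpretable value per concept
        concepts = torch.sigmoid(self.concept_net(x))
        if interventions is not None:
            # Intervention: overwrite selected concepts with user-corrected values
            for idx, value in interventions.items():
                concepts[:, idx] = value
        return self.label_net(concepts), concepts


# Usage: correct a mispredicted concept and re-run the label head.
model = ConceptBottleneckModel()
x = torch.randn(1, 64)
logits_before, concepts = model(x)
# Suppose the user knows concept 2 should be "present" (value 1.0):
logits_after, _ = model(x, interventions={2: 1.0})
```

In this plain CBM setting the correction is used once and then discarded, which is exactly the limitation the abstract says CB2M addresses by keeping interventions around for reuse.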

