April 3, 2024, 4:42 a.m. | Magamed Taimeskhanov, Ronan Sicre, Damien Garreau

cs.LG updates on arXiv.org

arXiv:2404.01964v1 Announce Type: cross
Abstract: CAM-based methods are widely-used post-hoc interpretability method that produce a saliency map to explain the decision of an image classification model. The saliency map highlights the important areas of the image relevant to the prediction. In this paper, we show that most of these methods can incorrectly attribute an important score to parts of the image that the model cannot see. We show that this phenomenon occurs both theoretically and experimentally. On the theory side, …
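For context on the family of methods the abstract refers to, here is a minimal Grad-CAM sketch in PyTorch: class-score gradients at the last convolutional block are average-pooled into channel weights, and the weighted, ReLU-rectified feature maps are upsampled into a saliency map. This is not the paper's code; the choice of ResNet-18, the `layer4` hook point, and the helper names are assumptions for illustration.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    # Store the feature maps of the hooked block.
    activations["feat"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    # Store the gradient of the class score w.r.t. those feature maps.
    gradients["feat"] = grad_out[0].detach()

# Grad-CAM is conventionally applied at the last convolutional block.
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

def grad_cam(x, class_idx=None):
    """Return an (H, W) saliency map for input x, normalized to [0, 1]."""
    logits = model(x)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()
    # Channel weights = globally average-pooled gradients.
    w = gradients["feat"].mean(dim=(2, 3), keepdim=True)   # (1, C, 1, 1)
    cam = F.relu((w * activations["feat"]).sum(dim=1))     # (1, h, w)
    cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:],
                        mode="bilinear", align_corners=False).squeeze()
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

# Example: heatmap = grad_cam(torch.randn(1, 3, 224, 224))
```

The upsampling step at the end is worth noting in light of the abstract's claim: the coarse map (e.g. 7x7 for a 224x224 input) is stretched over the full image, so high scores can land on pixels the network's receptive field treats very differently, which is the kind of misattribution the paper investigates.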
