Nov. 16, 2022, 2:11 a.m. | Shea Cardozo, Gabriel Islas Montero, Dmitry Kazhdan, Botty Dimanov, Maleakhi Wijaya, Mateja Jamnik, Pietro Lio

cs.LG updates on arXiv.org

Recent work has suggested that post-hoc explainers might be ineffective for
detecting spurious correlations in Deep Neural Networks (DNNs). However, we
show that the existing evaluation frameworks for this setting have serious
weaknesses. Previously proposed metrics are extremely difficult to interpret
and are not directly comparable across explainer methods. To alleviate these
constraints, we propose a new evaluation methodology, Explainer Divergence
Scores (EDS), grounded in an information-theoretic approach to evaluating
explainers. EDS is easy to interpret and naturally comparable …

