Nov. 23, 2022, 2:15 a.m. | Alexander Binder, Leander Weber, Sebastian Lapuschkin, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek

cs.CV updates on arXiv.org

While the evaluation of explanations is an important step towards trustworthy
models, it needs to be done carefully, and the employed metrics need to be
well understood. Specifically, the importance of model randomization testing is
often overestimated, and it is regarded as a sole criterion for selecting or
discarding explanation methods. To address the shortcomings of this test, we
start by observing an experimental gap between the rankings of explanation
methods produced by randomization-based sanity checks [1] and by model output
faithfulness measures (e.g. [25]). We identify limitations …
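The randomization-based sanity check referenced above can be illustrated with a minimal sketch: randomize the model's parameters and measure how much the explanation changes. Everything below is an assumption for illustration, not the paper's setup: a linear "model" whose gradient-times-input explanation is its weight vector scaled by the input, and a hand-rolled Spearman rank correlation as the similarity measure between attribution maps.

```python
# Minimal sketch of a model-randomization sanity check, under simplifying
# assumptions: a linear model f(x) = w . x (hypothetical stand-in for a deep
# network) and rank correlation as the similarity between explanations.
import numpy as np

def explain(weights, x):
    # Gradient-times-input attribution for the linear model (assumption).
    return weights * x

def rank_correlation(a, b):
    # Spearman rank correlation computed by hand (no SciPy dependency).
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra * rb).sum() / np.sqrt((ra ** 2).sum() * (rb ** 2).sum()))

def sanity_check(weights, x, rng):
    # Replace the model's parameters with random ones and compare the
    # explanations: a method that "passes" the check should produce a
    # clearly different map for the randomized model.
    randomized = rng.standard_normal(weights.shape)
    return rank_correlation(explain(weights, x), explain(randomized, x))

rng = np.random.default_rng(0)
w = rng.standard_normal(50)
x = rng.standard_normal(50)
score = sanity_check(w, x, rng)
# |score| near 0 indicates sensitivity to randomization; a value near 1
# would flag the method as insensitive to the model's parameters.
print(f"rank correlation after randomization: {score:.3f}")
```

Note that this check measures only sensitivity to parameter randomization; as the abstract points out, it can disagree with faithfulness measures that instead perturb the input and track the model's output, which is precisely the gap the paper examines.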

