Web: http://arxiv.org/abs/2105.06506

June 20, 2022, 1:11 a.m. | Joon Sik Kim, Gregory Plumb, Ameet Talwalkar

cs.LG updates on arXiv.org

Saliency methods are a popular class of feature attribution explanation
methods that aim to capture a model's predictive reasoning by identifying
"important" pixels in an input image. However, the development and adoption of
these methods are hindered by the lack of access to ground-truth model
reasoning, which prevents accurate evaluation. In this work, we design a
synthetic benchmarking framework, SMERF, that allows us to perform
ground-truth-based evaluation while controlling the complexity of the model's
reasoning. Experimentally, SMERF reveals significant limitations …

