Web: https://www.reddit.com/r/MachineLearning/comments/uixbwk/r_exsum_from_local_explanations_to_model/

May 5, 2022, 1:21 p.m. | /u/zyl1024

Excited to share our latest research on model interpretability, to appear at NAACL this summer.

In this paper, we reflect on local model explanations (e.g., LIME, SHAP, gradient saliency) and think about how people actually use them to derive high-level model understanding (e.g., is the model relying on spurious correlations? Is it biased? Can I trust it?).
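
For concreteness, a local explanation assigns each input feature a score describing its influence on one particular prediction. Here's a minimal gradient-saliency sketch in PyTorch; the tiny model and random input are hypothetical placeholders, and LIME/SHAP produce analogous per-feature attributions by different means:

    import torch
    import torch.nn as nn

    # Toy stand-in for any differentiable classifier (hypothetical model/input).
    model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
    model.eval()

    x = torch.randn(1, 8, requires_grad=True)  # a single input instance

    logits = model(x)
    score = logits[0, logits.argmax(dim=1)]  # score of the predicted class

    # Gradient of the class score w.r.t. the input features: the magnitude
    # of each entry is that feature's local "saliency" for this prediction.
    score.backward()
    saliency = x.grad.abs().squeeze()
    print(saliency)  # larger values = features the model leans on locally

A user stares at attributions like these, over many examples, and tries to generalize them into claims about the model as a whole, which is exactly the step the paper examines.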

Obviously, they need to be correct (or faithful), which has been the focus of many interpretability evaluations. However, we argue that they also …
