Web: http://arxiv.org/abs/2205.05057

May 11, 2022, 1:11 a.m. | Harmanpreet Kaur, Eytan Adar, Eric Gilbert, Cliff Lampe

cs.LG updates on arXiv.org

Understanding how ML models work is a prerequisite for responsibly designing,
deploying, and using ML-based systems. With interpretability approaches, ML can
now offer explanations for its outputs to aid human understanding. Though these
approaches draw on guidelines for how humans explain things to one another,
they ultimately aim at improving the artifact itself: the explanation. In this
paper, we propose an alternate framework for interpretability grounded in
Weick's sensemaking theory, which focuses on who the explanation is intended
for. Recent …
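
For context, the "interpretability approaches" the abstract refers to are
typically post-hoc methods that attribute a model's output to its input
features. A minimal sketch, using scikit-learn's permutation importance as an
illustrative stand-in (the paper does not prescribe any particular method, and
the dataset and model here are assumptions for demonstration only):

    # Minimal sketch of a post-hoc explanation: permutation feature
    # importance over a fitted model. Dataset and model are illustrative
    # assumptions, not taken from the paper.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    data = load_iris()
    X, y = data.data, data.target

    model = RandomForestClassifier(random_state=0).fit(X, y)

    # Shuffle each feature and measure the drop in accuracy: the larger
    # the drop, the more the model's output depends on that feature.
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

    for name, score in zip(data.feature_names, result.importances_mean):
        print(f"{name}: {score:.3f}")

Such a method optimizes the explanation artifact itself; the paper's argument
is that interpretability should instead be grounded in the sensemaking of the
person receiving it.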

ai arxiv explainability interpretability theory
