Meaningfully Debugging Model Mistakes using Conceptual Counterfactual Explanations. (arXiv:2106.12723v3 [cs.LG] UPDATED)
Web: http://arxiv.org/abs/2106.12723
June 16, 2022, 1:11 a.m. | Abubakar Abid, Mert Yuksekgonul, James Zou
cs.LG updates on arXiv.org
Understanding and explaining the mistakes made by trained models is critical
to many machine learning objectives, such as improving robustness, addressing
concept drift, and mitigating biases. However, this is often an ad hoc process
that involves manually looking at the model's mistakes on many test samples and
guessing at the underlying reasons for those incorrect predictions. In this
paper, we propose a systematic approach, conceptual counterfactual explanations
(CCE), that explains why a classifier makes a mistake on a particular test …
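The core idea — explaining a mistake as a small, sparse edit along human-interpretable concept directions that would flip the prediction — can be sketched in a toy setting. Everything below is our own illustrative construction (the classifier, the concept bank, and the optimization) and not the authors' code; real CCE operates on learned concept activation vectors in a deep model's embedding space.

```python
import numpy as np

# Toy sketch of a conceptual-counterfactual search: given the embedding of a
# misclassified sample, find sparse weights over concept directions so that
# adding the weighted concepts to the embedding flips the prediction toward
# the correct label. Large weights then name the concepts the model "missed".

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical linear classifier head over a 4-d embedding space.
w_head = np.array([1.0, -1.0, 0.5, 0.0])

def predict(emb):
    return sigmoid(emb @ w_head)

# Hypothetical concept bank: each row is a concept direction in embedding
# space (in practice, learned from labeled concept examples).
concepts = {
    "stripes": np.array([1.0, 0.0, 0.0, 0.0]),
    "water":   np.array([0.0, 1.0, 0.0, 0.0]),
    "grass":   np.array([0.0, 0.0, 1.0, 0.0]),
}
C = np.stack(list(concepts.values()))  # (n_concepts, dim)

def conceptual_counterfactual(emb, target=1.0, steps=500, lr=0.5, l1=0.01):
    """Find sparse concept weights a so that predict(emb + a @ C) nears target."""
    a = np.zeros(len(C))
    for _ in range(steps):
        p = predict(emb + a @ C)
        # gradient of binary cross-entropy w.r.t. a, through sigmoid and C
        grad = (p - target) * (C @ w_head)
        a -= lr * grad
        a -= lr * l1 * np.sign(a)  # L1 shrinkage keeps the explanation sparse
    return a

# A misclassified sample: true label is 1 but the model predicts below 0.5.
emb = np.array([-1.0, 0.5, 0.0, 0.0])

a = conceptual_counterfactual(emb, target=1.0)
p_after = predict(emb + a @ C)

for name, weight in zip(concepts, a):
    print(f"{name}: {weight:+.2f}")
print(f"prediction after concept edit: {p_after:.2f}")
```

In this toy run the "stripes" weight comes out positive and "water" negative, i.e. the counterfactual says the sample would be classified correctly if it showed more stripes and less water — the kind of concept-level diagnosis the paper proposes to produce systematically instead of by manual inspection.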
Latest AI/ML/Big Data Jobs
Machine Learning Researcher - Saalfeld Lab
@ Howard Hughes Medical Institute - Chevy Chase, MD | Ashburn, Virginia
Project Director, Machine Learning in US Health
@ ideas42.org | Remote, US
Data Science Intern
@ NannyML | Remote
Machine Learning Engineer NLP/Speech
@ Play.ht | Remote
Research Scientist, 3D Reconstruction
@ Yembo | Remote, US
Clinical Assistant or Associate Professor of Management Science and Systems
@ University at Buffalo | Buffalo, NY