Web: http://arxiv.org/abs/2205.03398

May 9, 2022, 1:11 a.m. | Ulrike Kuhl, André Artelt, Barbara Hammer

cs.LG updates on arXiv.org

To foster usefulness and accountability of machine learning (ML), it is
essential to explain a model's decisions in addition to evaluating its
performance. Accordingly, the field of explainable artificial intelligence
(XAI) has resurfaced as a topic of active research, offering approaches to
address the "how" and "why" of automated decision-making. Within this domain,
counterfactual explanations (CFEs) have gained considerable traction as a
psychologically grounded approach to generate post-hoc explanations. To do so,
CFEs highlight what changes to a model's input …
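The core idea behind CFEs can be illustrated with a toy example: search for a small change to an input that flips the model's prediction. The sketch below is not the authors' method; it uses a hypothetical linear "loan approval" model and a simple greedy single-feature search, purely to show what a counterfactual explanation looks like.

```python
# Illustrative sketch of a counterfactual explanation (CFE).
# The "model" here is a toy linear classifier (an assumption for
# demonstration): approve (1) if 2*income - debt > 10, else reject (0).

def predict(x):
    income, debt = x
    return 1 if 2 * income - debt > 10 else 0

def counterfactual(x, step=0.5, max_steps=100):
    """Greedy search: try nudging one feature at a time by `step`;
    return the first candidate whose prediction differs from x's."""
    original = predict(x)
    cf = list(x)
    for _ in range(max_steps):
        for i in range(len(cf)):
            for delta in (step, -step):
                cand = cf[:]
                cand[i] += delta
                if predict(cand) != original:
                    return cand  # minimal flip found along this path
        # no single nudge flips the prediction yet; walk the first
        # feature upward (a crude heuristic for this toy model)
        cf[0] += step
    return None

x = [4.0, 1.0]              # rejected: 2*4 - 1 = 7, not > 10
cf = counterfactual(x)       # e.g. a slightly higher income that
print(x, "->", cf)           # would have led to approval
```

The returned counterfactual answers the "why" question in terms a user can act on: "had your income been X instead of Y, the decision would have been different." Real CFE methods additionally optimize for proximity, sparsity, and plausibility of the suggested change.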

