Responsibility: An Example-based Explainable AI approach via Training Process Inspection. (arXiv:2209.03433v1 [cs.LG])
Sept. 9, 2022, 1:11 a.m. | Faraz Khadivpour, Arghasree Banerjee, Matthew Guzdial
cs.LG updates on arXiv.org arxiv.org
Explainable Artificial Intelligence (XAI) methods are intended to help human
users better understand the decision-making of an AI agent. However, many
modern XAI approaches are unintuitive to end users, particularly those without
prior AI or ML knowledge. In this paper, we present a novel XAI approach we
call Responsibility that identifies the most responsible training example for a
particular decision. This example can then be shown as an explanation: "this is
what I (the AI) learned that led me …
arxiv example explainable ai process responsibility training
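The abstract describes an example-based explanation: for a given decision, surface the single training example deemed "most responsible" and show it to the user. The paper's Responsibility method derives this by inspecting the training process itself; the sketch below is a much simpler stand-in (nearest training example in feature space, on hypothetical toy data) purely to illustrate the explanation format, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: two Gaussian blobs, labeled 0 and 1.
X_train = np.vstack([rng.normal(0.0, 1.0, (20, 2)),
                     rng.normal(4.0, 1.0, (20, 2))])
y_train = np.array([0] * 20 + [1] * 20)

def most_responsible_example(x_query):
    """Proxy for 'most responsible': index of the nearest training example.

    The actual Responsibility method attributes the decision via training
    process inspection; nearest-neighbor distance is used here only as a
    simple, illustrative substitute.
    """
    dists = np.linalg.norm(X_train - x_query, axis=1)
    return int(np.argmin(dists))

x_query = np.array([3.5, 4.2])
idx = most_responsible_example(x_query)
print(f'Explanation: "this is what I (the AI) learned that led me here" -> '
      f'training example #{idx}, label {y_train[idx]}')
```

The point of the format is that the explanation is a concrete training datum a non-expert can inspect, rather than a saliency map or feature attribution.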