Web: http://arxiv.org/abs/2109.00520

May 6, 2022, 1:12 a.m. | Yan Jia, John McDermid, Tom Lawton, Ibrahim Habli

cs.LG updates on arXiv.org

Established approaches to assuring safety-critical systems and software are
difficult to apply to systems employing ML where there is no clear, pre-defined
specification against which to assess validity. This problem is exacerbated by
the "opaque" nature of ML where the learnt model is not amenable to human
scrutiny. Explainable AI (XAI) methods have been proposed to tackle this issue
by producing human-interpretable representations of ML models which can help
users to gain confidence and build trust in the ML system. …
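As a concrete illustration of the kind of human-interpretable representation XAI methods produce, the sketch below implements permutation feature importance, one common model-agnostic technique. Everything here is hypothetical for illustration (the synthetic data, the stand-in `model`, and the helper name are not from the paper): permuting one feature at a time breaks its link to the label, and the resulting drop in accuracy indicates how much the model relies on that feature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: feature 0 drives the label, feature 1 is pure noise.
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(int)

def model(X):
    # Stand-in for a trained black-box classifier: thresholds feature 0.
    return (X[:, 0] > 0).astype(int)

def permutation_importance(model, X, y, n_repeats=10):
    base = np.mean(model(X) == y)  # baseline accuracy on unshuffled data
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break feature-label link
            drops.append(base - np.mean(model(Xp) == y))
        importances.append(float(np.mean(drops)))
    return importances

imp = permutation_importance(model, X, y)
# imp[0] is large (the model depends on feature 0); imp[1] is near zero.
```

A safety reviewer could use such scores to check that a clinical model relies on clinically plausible features rather than spurious correlates, which is the kind of scrutiny the abstract argues opaque models otherwise resist.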

