all AI news
Black Box Model Explanations and the Human Interpretability Expectations -- An Analysis in the Context of Homicide Prediction. (arXiv:2210.10849v1 [cs.LG])
Oct. 21, 2022, 1:12 a.m. | José Ribeiro, Níkolas Carneiro, Ronnie Alves
cs.LG updates on arXiv.org arxiv.org
Strategies based on Explainable Artificial Intelligence (XAI) have promoted
better human interpretability of the results of black-box machine learning
models. The XAI tools currently in use (Ciu, Dalex, Eli5, Lofo, Shap,
and Skater) provide various forms of explanation, including global rankings of
attribute relevance. Current research points to the need for further
studies on how these explanations meet the interpretability expectations of
human experts and how they can be used to make the model even more transparent …
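The "global rankings of attribute relevance" mentioned above can be sketched with scikit-learn's permutation importance, a model-agnostic measure similar in spirit to the tools the abstract lists (the synthetic dataset and model choice here are illustrative assumptions, not taken from the paper):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Illustrative synthetic data: 5 attributes, binary target.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much score drops when each attribute is shuffled.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Global relevance ranking: attribute indices, most important first.
ranking = np.argsort(result.importances_mean)[::-1]
print(ranking)
```

Each XAI tool computes such a ranking differently (e.g. Shap aggregates per-instance attributions), which is precisely why their agreement with expert expectations is worth studying.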