Inference for Interpretable Machine Learning: Fast, Model-Agnostic Confidence Intervals for Feature Importance. (arXiv:2206.02088v1 [stat.ML])
stat.ML updates on arXiv.org
To trust machine learning in high-stakes problems, we need models that are
both reliable and interpretable. Recently, a growing body of work on
interpretable machine learning has generated human-understandable insights
into data, models, or predictions. At the same time, there has been increased
interest in quantifying the reliability and uncertainty of machine learning
predictions, often in the form of confidence intervals for predictions using
conformal inference. Yet relatively little attention has been given …
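The abstract only states the goal, and the paper's actual method is not shown here. As a rough illustration of what a model-agnostic confidence interval for feature importance can look like, the sketch below combines permutation importance (increase in test MSE when one feature is shuffled) with a percentile bootstrap; the synthetic data, the least-squares model, and the bootstrap choice are all assumptions for the example, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y depends on features 0 and 1 only (illustrative setup)
n, p = 500, 4
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(scale=0.5, size=n)

# Train/test split
X_tr, X_te, y_tr, y_te = X[:300], X[300:], y[:300], y[300:]

# Any black-box predictor works; a least-squares fit keeps this dependency-free
beta, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)

def predict(Z):
    return Z @ beta

def perm_importance(j, idx):
    """Increase in test MSE on rows idx when feature j is shuffled."""
    Z = X_te[idx].copy()
    Z[:, j] = rng.permutation(Z[:, j])
    base = np.mean((predict(X_te[idx]) - y_te[idx]) ** 2)
    return np.mean((predict(Z) - y_te[idx]) ** 2) - base

# Percentile-bootstrap 95% CI for each feature's importance
B = 200
cis = []
for j in range(p):
    stats = [perm_importance(j, rng.integers(0, len(y_te), len(y_te)))
             for _ in range(B)]
    lo, hi = np.percentile(stats, [2.5, 97.5])
    cis.append((lo, hi))
    print(f"feature {j}: 95% CI [{lo:.3f}, {hi:.3f}]")
```

In this toy run, the intervals for features 0 and 1 sit well above zero, while those for the noise features hover near zero; the bootstrap is the simplest way to get such intervals, but it requires refitting or re-evaluating the importance statistic many times, which is precisely the kind of cost that faster inference procedures aim to avoid.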