June 7, 2022, 1:11 a.m. | Luqin Gan, Lili Zheng, Genevera I. Allen

stat.ML updates on arXiv.org (arxiv.org)

To trust machine learning for high-stakes problems, we need models that are
both reliable and interpretable. Recently, there has been a growing body of
work on interpretable machine learning, which generates human-understandable
insights into data, models, or predictions. At the same time, there has been
increased interest in quantifying the reliability and uncertainty of machine
learning predictions, often in the form of confidence intervals for predictions
using conformal inference. Yet, there has been relatively little attention
given …
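The conformal inference mentioned in the abstract is straightforward to sketch. The snippet below is a minimal, hypothetical illustration of split conformal prediction intervals for regression, not the method proposed in the paper; the simulated data, the random forest model, and the 90% coverage level are all assumptions made for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Simulated regression data (hypothetical; any dataset would do).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.3, size=1000)

# Split conformal: fit the model on one half, calibrate on the other.
X_fit, X_cal, y_fit, y_cal = train_test_split(X, y, test_size=0.5, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_fit, y_fit)

# Absolute residuals on the calibration set define the interval half-width.
alpha = 0.1  # target 90% marginal coverage
residuals = np.abs(y_cal - model.predict(X_cal))

# Finite-sample-corrected quantile: the ceil((1 - alpha)(n + 1))-th smallest
# residual guarantees coverage of at least 1 - alpha, distribution-free.
k = int(np.ceil((1 - alpha) * (len(residuals) + 1)))
q = np.sort(residuals)[k - 1]

# Prediction interval for a new point: [f(x) - q, f(x) + q].
x_new = rng.normal(size=(1, 5))
pred = model.predict(x_new)[0]
print(f"90% prediction interval: [{pred - q:.2f}, {pred + q:.2f}]")
```

The finite-sample correction in the quantile step is what makes the coverage guarantee hold exactly, with no distributional assumptions on the data beyond exchangeability.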

Tags: arxiv, confidence, feature importance, inference, machine learning, ML, model-agnostic
