Web: http://arxiv.org/abs/2110.09192

May 9, 2022, 1:10 a.m. | David Stutz, Krishnamurthy (Dj) Dvijotham, Ali Taylan Cemgil, Arnaud Doucet

stat.ML updates on arXiv.org

Modern deep learning-based classifiers show very high accuracy on test data,
but this does not provide sufficient guarantees for safe deployment, especially
in high-stakes AI applications such as medical diagnosis. Usually, predictions
are obtained without a reliable uncertainty estimate or a formal guarantee.
Conformal prediction (CP) addresses these issues by using the classifier's
predictions, e.g., its probability estimates, to predict confidence sets
containing the true class with a user-specified probability. However, using CP
as a separate processing step after …
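
To make the idea of confidence sets concrete, below is a minimal sketch of standard split conformal prediction for classification, assuming softmax probability outputs and the common nonconformity score 1 − p(true class). This illustrates the "separate processing step" baseline the abstract refers to, not the method proposed in the paper; all function names and the toy data are illustrative assumptions.

```python
import numpy as np

def calibrate_threshold(cal_probs, cal_labels, alpha=0.1):
    """Split conformal calibration (illustrative sketch): find the score
    threshold so that prediction sets contain the true class with
    probability >= 1 - alpha on exchangeable data."""
    n = len(cal_labels)
    # Nonconformity score: 1 - predicted probability of the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample corrected empirical quantile of the calibration scores.
    k = int(np.ceil((n + 1) * (1 - alpha))) - 1
    return np.sort(scores)[min(k, n - 1)]

def predict_sets(test_probs, threshold):
    """Boolean mask of classes included in each confidence set."""
    return (1.0 - test_probs) <= threshold

# Toy usage with random softmax outputs (not real data).
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(5), size=200)   # 200 calibration examples, 5 classes
cal_labels = rng.integers(0, 5, size=200)
thr = calibrate_threshold(cal_probs, cal_labels, alpha=0.1)
test_probs = rng.dirichlet(np.ones(5), size=10)
sets = predict_sets(test_probs, thr)  # (10, 5) membership mask of confidence sets
```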

arxiv learning
