Oct. 17, 2022, 1:13 a.m. | Teodora Popordanoska, Raphael Sayer, Matthew B. Blaschko

stat.ML updates on arXiv.org

Calibrated probabilistic classifiers are models whose predicted probabilities
can directly be interpreted as uncertainty estimates. It has been shown
recently that deep neural networks are poorly calibrated and tend to output
overconfident predictions. As a remedy, we propose a low-bias, trainable
calibration error estimator based on Dirichlet kernel density estimates, which
asymptotically converges to the true $L_p$ calibration error. This novel
estimator enables us to tackle the strongest notion of multiclass calibration,
called canonical (or distribution) calibration, while other common …
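To make the estimator's structure concrete, below is a minimal NumPy sketch of a leave-one-out Dirichlet kernel ratio estimator of the canonical $L_p$ calibration error $\mathbb{E}\,\lVert \mathbb{E}[Y \mid f(X)] - f(X) \rVert_p^p$. The function name `dirichlet_canonical_ce`, the bandwidth `h`, and the plain plug-in leave-one-out form are illustrative assumptions, not the paper's exact construction; in particular, the low-bias correction the abstract mentions is omitted here.

```python
import numpy as np
from scipy.special import gammaln

def dirichlet_canonical_ce(preds, labels_onehot, h=0.1, p=2, eps=1e-12):
    """Hypothetical sketch: leave-one-out Dirichlet-KDE estimate of the
    L_p canonical calibration error E||E[Y | f(X)] - f(X)||_p^p.

    preds:         (n, k) predicted probability vectors (rows on the simplex)
    labels_onehot: (n, k) one-hot labels
    h:             kernel bandwidth (assumed hyperparameter, not from the abstract)
    """
    preds = np.clip(preds, eps, 1.0)
    # Dirichlet kernel centered at each prediction: alpha_j = preds[j]/h + 1
    alpha = preds / h + 1.0
    # log Dir(preds[i]; alpha[j]) = log_norm[j] + sum_k (alpha[j,k]-1) log preds[i,k]
    log_norm = gammaln(alpha.sum(axis=1)) - gammaln(alpha).sum(axis=1)
    log_K = np.log(preds) @ (alpha - 1.0).T + log_norm[None, :]
    K = np.exp(log_K)
    np.fill_diagonal(K, 0.0)  # leave-one-out: drop self-weight
    # Kernel-regression estimate of E[Y | f(X) = preds[j]]
    cond_mean = (K.T @ labels_onehot) / (K.sum(axis=0)[:, None] + eps)
    # Plug-in L_p error (to the p-th power), averaged over data points
    return np.mean(np.sum(np.abs(cond_mean - preds) ** p, axis=1))

# Toy usage: perfectly calibrated synthetic predictions should score near zero.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(3), size=2000)
labels = np.eye(3)[np.array([rng.choice(3, p=q / q.sum()) for q in probs])]
print(dirichlet_canonical_ce(probs, labels))
```

Because the estimate is a differentiable function of `preds`, the same construction could in principle be written in an autodiff framework and used as a trainable calibration objective, which is what makes a kernel-based estimator of this form attractive.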

