On the Usefulness of the Fit-on-the-Test View on Evaluating Calibration of Classifiers. (arXiv:2203.08958v3 [cs.LG] UPDATED)
Web: http://arxiv.org/abs/2203.08958
May 4, 2022, 1:12 a.m. | Markus Kängsepp, Kaspar Valk, Meelis Kull
cs.LG updates on arXiv.org
Every uncalibrated classifier has a corresponding true calibration map that
calibrates its confidence. Deviations of this idealistic map from the identity
map reveal miscalibration. Such calibration errors can be reduced with many
post-hoc calibration methods which fit some family of calibration maps on a
validation dataset. In contrast, evaluation of calibration with the expected
calibration error (ECE) on the test set does not explicitly involve fitting.
However, as we demonstrate, ECE can still be viewed as if fitting a family …
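For readers unfamiliar with how ECE is usually computed on a test set, the sketch below shows the standard equal-width binned estimate of the expected calibration error. It is a minimal illustration with made-up confidences and labels, not the paper's fit-on-the-test construction; the function name and toy data are assumptions for demonstration only.

```python
import numpy as np

def binned_ece(confidences, correct, n_bins=15):
    """Equal-width binned ECE: weighted average of |accuracy - mean confidence| per bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # Assign each confidence to a bin index in [0, n_bins - 1].
    idx = np.digitize(confidences, edges[1:-1])
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            acc = correct[mask].mean()        # empirical accuracy in this bin
            conf = confidences[mask].mean()   # mean predicted confidence in this bin
            ece += (mask.sum() / n) * abs(acc - conf)
    return ece

# Toy usage with hypothetical predictions (not from the paper).
conf = np.array([0.9, 0.8, 0.65, 0.95, 0.55])
correct = np.array([1, 1, 0, 1, 1])
print(binned_ece(conf, correct, n_bins=10))
```

Equal-width binning with roughly 10 to 15 bins is a common default for this estimator.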
Latest AI/ML/Big Data Jobs
Data Analyst, Patagonia Action Works
@ Patagonia | Remote
Data & Insights Strategy & Innovation General Manager
@ Chevron Services Company, a division of Chevron U.S.A. Inc. | Houston, TX
Faculty members in Research areas such as Bayesian and Spatial Statistics; Data Privacy and Security; AI/ML; NLP; Image and Video Data Analysis
@ Ahmedabad University | Ahmedabad, India
Director, Applied Mathematics & Computational Research Division
@ Lawrence Berkeley National Lab | Berkeley, CA
Business Data Analyst
@ MainStreet Family Care | Birmingham, AL
Assistant/Associate Professor of the Practice in Business Analytics
@ Georgetown University McDonough School of Business | Washington, DC