Oct. 7, 2022, 1:14 a.m. | A. Michael Carrell, Neil Mallinar, James Lucas, Preetum Nakkiran

stat.ML updates on arXiv.org (arxiv.org)

Calibration is a fundamental property of a good predictive model: it requires
that the model predicts correctly in proportion to its confidence. Modern
neural networks, however, provide no strong guarantees on their calibration --
and can be either poorly calibrated or well-calibrated depending on the
setting. It is currently unclear which factors contribute to good calibration
(architecture, data augmentation, overparameterization, etc.), though various
claims exist in the literature.
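For readers unfamiliar with how calibration is typically measured, below is a minimal sketch of the standard expected calibration error (ECE) metric: bin predictions by confidence and compare each bin's average confidence with its empirical accuracy. This is an illustrative assumption about the evaluation setup, not the decomposition the paper itself proposes; the function name and binning choices are hypothetical.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """Weighted average of |accuracy - confidence| over equal-width confidence bins.

    A standard ECE sketch; not the paper's proposed decomposition.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        bin_conf = confidences[in_bin].mean()  # average predicted confidence in the bin
        bin_acc = correct[in_bin].mean()       # empirical accuracy in the bin
        ece += in_bin.mean() * abs(bin_acc - bin_conf)
    return ece

# Toy usage with hypothetical predictions:
conf = np.array([0.9, 0.8, 0.75, 0.6, 0.95])
hit = np.array([1, 1, 0, 1, 1])
print(expected_calibration_error(conf, hit))
```

A perfectly calibrated model would give an ECE of zero: within every confidence bin, the fraction of correct predictions matches the stated confidence.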


We propose a systematic way to study the calibration error: by decomposing …

