April 22, 2024, 4:42 a.m. | Yuchi Liu, Lei Wang, Yuli Zou, James Zou, Liang Zheng

cs.LG updates on arXiv.org arxiv.org

arXiv:2404.13016v1 Announce Type: cross
Abstract: Model calibration aims to align confidence with prediction correctness. The Cross-Entropy (CE) loss is widely used for calibrator training, which forces the model to increase confidence on the ground truth class. However, we find the CE loss has intrinsic limitations. For example, for a narrow misclassification, a calibrator trained by the CE loss often produces high confidence on the wrongly predicted class (e.g., a test sample is wrongly classified and its softmax score on the …
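As a rough illustration of the behavior the abstract describes (not the paper's own code), the sketch below computes the CE loss on a hypothetical "narrow misclassification": the logits, class indices, and numbers are made up for demonstration only.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a 1-D logit vector.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(logits, target):
    # CE loss = -log(softmax probability of the ground-truth class).
    probs = softmax(logits)
    return -np.log(probs[target]), probs

# Hypothetical narrow misclassification: the wrong class (index 1)
# barely beats the ground-truth class (index 0).
logits = np.array([2.0, 2.1, -1.0])
target = 0

loss, probs = cross_entropy(logits, target)
print(f"predicted class: {probs.argmax()}, ground truth: {target}")
print(f"softmax probs:   {np.round(probs, 3)}")
print(f"CE loss:         {loss:.3f}")
# Minimizing this loss pushes probability mass toward the ground-truth
# class, i.e. it raises confidence on class 0 no matter how narrow or
# gross the error is -- the limitation the abstract points to.
```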

