April 22, 2024, 4:42 a.m. | Yuchi Liu, Lei Wang, Yuli Zou, James Zou, Liang Zheng

cs.LG updates on arXiv.org arxiv.org

arXiv:2404.13016v1 Announce Type: cross
Abstract: Model calibration aims to align confidence with prediction correctness. The Cross-Entropy CE) loss is widely used for calibrator training, which enforces the model to increase confidence on the ground truth class. However, we find the CE loss has intrinsic limitations. For example, for a narrow misclassification, a calibrator trained by the CE loss often produces high confidence on the wrongly predicted class (e.g., a test sample is wrongly classified and its softmax score on the …
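To make the setup concrete, below is a minimal sketch of a common CE-trained calibrator, temperature scaling: a single temperature parameter T is chosen to minimize the CE (negative log-likelihood) loss on held-out validation logits. The logits, labels, and the grid-search fit are illustrative assumptions, not the paper's method; the example simply shows the kind of calibrator and loss the abstract refers to, including a narrowly misclassified sample of the sort the authors discuss.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; subtract the max for numerical stability.
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ce_loss(logits, labels, T=1.0):
    # Cross-entropy (negative log-likelihood) of the ground-truth class,
    # averaged over the validation set.
    p = softmax(logits, T)
    return -np.mean(np.log(p[np.arange(len(labels)), labels]))

# Hypothetical validation logits and labels (all ground truth = class 0).
logits = np.array([[2.0, 1.8, 0.1],   # narrow margin, correctly classified
                   [1.5, 1.6, 0.2],   # narrow misclassification (argmax = 1)
                   [3.0, 0.2, 0.1]])  # confident, correctly classified
labels = np.array([0, 0, 0])

# Temperature scaling: pick the T that minimizes CE on the validation set.
Ts = np.linspace(0.5, 5.0, 200)
best_T = Ts[np.argmin([ce_loss(logits, labels, T) for T in Ts])]
```

Because the CE loss only rewards probability mass on the ground-truth class, a single scalar T fit this way can still leave high confidence on the wrongly predicted class for narrow misclassifications, which is the limitation the abstract highlights.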

