March 25, 2024, 4:41 a.m. | Guoxuan Xia, Olivier Laurent, Gianni Franchi, Christos-Savvas Bouganis

cs.LG updates on arXiv.org

arXiv:2403.14715v1 Announce Type: new
Abstract: Label smoothing (LS) is a popular regularisation method for training deep neural network classifiers due to its effectiveness in improving test accuracy and its simplicity in implementation. "Hard" one-hot labels are "smoothed" by uniformly distributing probability mass to other classes, reducing overfitting. In this work, we reveal that LS negatively affects selective classification (SC) - where the aim is to reject misclassifications using a model's predictive uncertainty. We first demonstrate empirically across a range of …
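For readers unfamiliar with the mechanism the abstract describes, here is a minimal sketch of uniform label smoothing; the function name and the smoothing parameter value are illustrative, not taken from the paper:

```python
import numpy as np

def smooth_labels(one_hot: np.ndarray, alpha: float = 0.1) -> np.ndarray:
    """Uniform label smoothing: remove a fraction `alpha` of the probability
    mass from the hard one-hot target and spread it evenly over all K classes."""
    num_classes = one_hot.shape[-1]
    return (1.0 - alpha) * one_hot + alpha / num_classes

# Example: 3-class problem, true class = 1, alpha = 0.1
one_hot = np.array([0.0, 1.0, 0.0])
print(smooth_labels(one_hot))  # -> [0.0333..., 0.9333..., 0.0333...]
```

The smoothed targets still sum to 1, but the model is no longer pushed toward fully confident predictions, which is the regularising effect the abstract refers to.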

