Oct. 6, 2022, 1:13 a.m. | Jinxin Zhou, Chong You, Xiao Li, Kangning Liu, Sheng Liu, Qing Qu, Zhihui Zhu

stat.ML updates on arXiv.org

While cross entropy (CE) is the most commonly used loss for training deep neural
networks on classification tasks, many alternative losses have been developed
in pursuit of better empirical performance. Which of them is best to use
remains a mystery, as multiple factors appear to affect the answer, such as
the properties of the dataset and the choice of network architecture. This
paper studies the choice of loss function by examining the last-layer
features of …

arxiv losses neural collapse perspective
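
The abstract contrasts CE with alternative classification losses. As a concrete point of reference, the sketch below computes CE alongside one common alternative, mean-squared error against one-hot targets, on the same logits. PyTorch is an assumption here, and the code is an illustration, not code from the paper.

    # Minimal sketch, not from the paper: cross-entropy vs. an MSE
    # alternative on the same classification logits. Shapes and names
    # are illustrative assumptions.
    import torch
    import torch.nn.functional as F

    batch_size, num_classes = 4, 10
    logits = torch.randn(batch_size, num_classes)          # last-layer outputs
    labels = torch.randint(0, num_classes, (batch_size,))  # integer class labels

    # Cross-entropy: softmax over the logits, then negative log-likelihood.
    ce_loss = F.cross_entropy(logits, labels)

    # One alternative: regress the logits onto one-hot targets with MSE.
    targets = F.one_hot(labels, num_classes).float()
    mse_loss = F.mse_loss(logits, targets)

    print(f"CE:  {ce_loss.item():.4f}")
    print(f"MSE: {mse_loss.item():.4f}")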
