Web: http://arxiv.org/abs/2206.11602

June 24, 2022, 1:10 a.m. | Xiong Zhou, Xianming Liu, Deming Zhai, Junjun Jiang, Xin Gao, Xiangyang Ji

cs.LG updates on arXiv.org arxiv.org

The success of deep neural networks relies heavily on the availability of
large amounts of high-quality annotated data, which, however, is difficult or
expensive to obtain. The resulting labels may be class-imbalanced, noisy, or
human-biased. Learning unbiased classification models from such imperfectly
annotated datasets is challenging, and typically leads to overfitting or
underfitting. In this work, we thoroughly investigate the popular softmax loss
and margin-based loss, and offer a feasible approach to tighten the
generalization …
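For context on the two losses the abstract contrasts, the sketch below shows a standard softmax cross-entropy loss alongside a simple additive-margin variant, which subtracts a margin from the target logit before the softmax. This is an illustrative assumption about the general family of margin-based losses, not the paper's specific formulation; the function names and the margin value are hypothetical.

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    # Standard softmax loss: -log p_y with p = softmax(logits).
    z = logits - logits.max()               # shift for numerical stability
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def margin_softmax_cross_entropy(logits, label, margin=0.5):
    # Additive-margin variant (illustrative): penalizing the target logit
    # forces a larger gap between the target class and the rest.
    z = logits.astype(float).copy()
    z[label] -= margin
    return softmax_cross_entropy(z, label)

logits = np.array([2.0, 1.0, 0.1])
plain = softmax_cross_entropy(logits, 0)
margin = margin_softmax_cross_entropy(logits, 0)
print(plain, margin)  # the margin variant is always at least as large
```

Because the margin only ever shrinks the target logit, the margin loss upper-bounds the plain softmax loss, which is one reason margin-based losses are studied as a route to tighter generalization.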

