Web: http://arxiv.org/abs/2110.03321

May 13, 2022, 1:10 a.m. | Amanda Olmin, Fredrik Lindsten

stat.ML updates on arXiv.org

Labelling data for supervised learning can be costly and time-consuming, and
the risk of incorporating label noise into large data sets is ever-present.
When training a flexible discriminative model using a strictly proper loss,
such noise will inevitably shift the solution towards the conditional
distribution over noisy labels. Nevertheless, while deep neural networks have
proven capable of fitting random labels, regularisation and the use of robust
loss functions empirically mitigate the effects of label noise. However, such
observations concern …
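
To make the first claim concrete, here is a minimal NumPy sketch (not from the paper; the symmetric-noise setup, the noise rate `eps`, and the probabilities are illustrative assumptions). For a fixed input x, the minimiser of a strictly proper loss such as the log loss is the mean of the observed labels, so a flexible model trained on noisy labels fits the noisy conditional p(ỹ=1|x) = (1−ε)·p(y=1|x) + ε·p(y=0|x) rather than the clean one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: one input x with clean conditional p(y=1|x) = 0.8,
# corrupted by symmetric label-flip noise with rate eps (both assumed).
p_clean = 0.8
eps = 0.3
n = 200_000

y_clean = rng.random(n) < p_clean          # clean binary labels
flip = rng.random(n) < eps                 # which labels get flipped
y_noisy = np.where(flip, ~y_clean, y_clean).astype(float)

# A strictly proper loss such as the log loss (cross-entropy) is minimised
# by the mean of the observed labels, so training on the noisy data
# recovers the *noisy* conditional, not the clean one.
p_hat = y_noisy.mean()

# Closed form: p(noisy y = 1 | x) = (1 - eps) * p + eps * (1 - p)
p_noisy = (1 - eps) * p_clean + eps * (1 - p_clean)

print(f"fitted probability: {p_hat:.3f}")    # ~0.62
print(f"noisy conditional:  {p_noisy:.3f}")  # 0.620
print(f"clean conditional:  {p_clean:.3f}")  # 0.800
```

Under this assumed symmetric noise model the fitted probability lands at about 0.62, matching the noisy conditional and illustrating the shift away from the clean value of 0.8.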

Tags: arxiv, labels, ml, reliability, robustness, training
