Web: http://arxiv.org/abs/2206.07277

June 16, 2022, 1:10 a.m. | Jongwoo Ko, Bongsoo Yi, Se-Young Yun

cs.LG updates on arXiv.org

Because label noise, one of the most common distribution shifts, severely
degrades the generalization performance of deep neural networks, robust
training with noisy labels has become an important task in modern deep
learning. In this paper, we propose our framework, coined Adaptive LAbel
smoothing on Sub-ClAssifier (ALASCA), which provides a robust feature extractor
with theoretical guarantees and negligible additional computation. First, we
derive that label smoothing (LS) incurs implicit Lipschitz regularization (LR).
Furthermore, based on these derivations, we apply the …
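For readers unfamiliar with the label smoothing (LS) loss the abstract refers to, below is a minimal sketch of standard LS cross-entropy in PyTorch. This is not the ALASCA implementation described in the paper, only the conventional technique it builds on; the function name `label_smoothing_ce` and the smoothing factor `eps` are illustrative choices.

```python
# A minimal sketch of standard label smoothing (LS) cross-entropy.
# NOTE: this is not the paper's ALASCA method, just the usual LS loss it builds on.
import torch
import torch.nn.functional as F


def label_smoothing_ce(logits: torch.Tensor,
                       targets: torch.Tensor,
                       eps: float = 0.1) -> torch.Tensor:
    """Cross-entropy against a smoothed target distribution:
    (1 - eps) mass on the labeled class plus eps / K spread uniformly over K classes."""
    num_classes = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    # Build the smoothed one-hot targets.
    smooth = torch.full_like(log_probs, eps / num_classes)
    smooth.scatter_(-1, targets.unsqueeze(-1), 1.0 - eps + eps / num_classes)
    return -(smooth * log_probs).sum(dim=-1).mean()


# Usage example with random data (batch of 8, 10 classes).
logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
loss = label_smoothing_ce(logits, targets, eps=0.1)
print(loss.item())
```

The smoothing factor `eps` controls how much probability mass is moved from the labeled class to the uniform distribution; the paper's contribution is, in part, to relate this kind of smoothing to an implicit Lipschitz regularization effect and to adapt it on sub-classifiers.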

Tags: arxiv, deep learning, cs.LG, noise
