Web: http://arxiv.org/abs/2012.11073

May 4, 2022, 1:12 a.m. | Kensuke Nakamura, Bong-Soo Sohn, Kyoung-Jae Won, Byung-Woo Hong

cs.LG updates on arXiv.org arxiv.org

Regularization is essential for avoiding overfitting to the training data during
network optimization, leading to better generalization of the trained networks.
Label noise provides strong implicit regularization by replacing the
ground-truth labels of training examples with uniform random labels.
However, it can also produce misleading gradients due to the large losses
associated with incorrect labels. We propose a first-order optimization method
(Label-Noised Trim-SGD) that combines label noise with example trimming in
order to remove the …
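Based on the abstract, the two ingredients can be sketched as follows: inject uniform random labels into a fraction of the batch, then trim the examples with the largest per-example losses before averaging the gradient, since those are the ones most likely to carry a misleading (noised) label. This is a minimal illustration, not the paper's algorithm; the function names, noise rate, and trim fraction are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def inject_label_noise(labels, num_classes, noise_rate, rng):
    """Replace a random fraction of labels with uniform random labels.
    Illustrative only; the paper's exact noising scheme may differ."""
    noisy = labels.copy()
    mask = rng.random(len(labels)) < noise_rate
    noisy[mask] = rng.integers(0, num_classes, size=int(mask.sum()))
    return noisy

def trim_batch(losses, trim_fraction):
    """Return indices of the examples kept after discarding the
    largest-loss examples, which likely reflect incorrect labels."""
    k = int(len(losses) * (1.0 - trim_fraction))
    return np.argsort(losses)[:k]

# Toy batch: 8 examples, 3 classes.
labels = np.array([0, 1, 2, 0, 1, 2, 0, 1])
noisy = inject_label_noise(labels, num_classes=3, noise_rate=0.25, rng=rng)

# Stand-in for per-example losses from a forward pass.
losses = rng.random(8) * 3.0
keep = trim_batch(losses, trim_fraction=0.25)

# Only the 6 lowest-loss examples would contribute to the SGD step.
print(len(keep))
```

In a real training loop, the gradient would then be computed only over `losses[keep]`, so the regularizing effect of the label noise is retained while its largest, most misleading loss terms are excluded from the update.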

arxiv gradient network optimization regularization stochastic
