Web: http://arxiv.org/abs/2206.11851

June 24, 2022, 1:12 a.m. | Do-Myoung Lee, Yeachan Kim, Chang-gyun Seo

cs.CL updates on arXiv.org

Deep neural networks (DNNs) have a high capacity to completely memorize noisy
labels given sufficient training time, and this memorization, unfortunately,
leads to performance degradation. Recently, virtual adversarial training (VAT)
has attracted attention for its ability to further improve the generalization
of DNNs in semi-supervised learning. The driving force behind VAT is to prevent
models from overfitting to data points by enforcing consistency between the
predictions on the inputs and on their perturbed counterparts. This strategy
could be helpful in learning from noisy labels if it …
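The abstract is truncated at the source, so the paper's exact formulation is not shown here. As background, the standard VAT consistency term it builds on (Miyato et al., 2018) can be sketched in PyTorch roughly as follows; the function vat_loss, the helper _l2_normalize, and the hyperparameter values xi, eps, and n_power are illustrative assumptions, not taken from the paper.

import torch
import torch.nn.functional as F

def _l2_normalize(d):
    # Normalize each sample's perturbation to unit L2 norm.
    return d / (d.flatten(1).norm(dim=1).view(-1, *([1] * (d.dim() - 1))) + 1e-12)

def vat_loss(model, x, xi=1e-6, eps=2.0, n_power=1):
    # Clean predictions serve as fixed targets (no gradient flows through them).
    with torch.no_grad():
        target = F.softmax(model(x), dim=1)

    # Power iteration: find the direction the model's output is most sensitive to.
    d = _l2_normalize(torch.randn_like(x))
    for _ in range(n_power):
        d.requires_grad_()
        kl = F.kl_div(F.log_softmax(model(x + xi * d), dim=1), target,
                      reduction="batchmean")
        d = _l2_normalize(torch.autograd.grad(kl, d)[0])

    # Consistency term: predictions on x and on x + r_adv should agree.
    r_adv = eps * d
    return F.kl_div(F.log_softmax(model(x + r_adv), dim=1), target,
                    reduction="batchmean")

In semi-supervised or noisy-label training, this term would typically be added to the supervised objective, e.g. loss = ce_loss + alpha * vat_loss(model, x), with alpha weighting the consistency regularizer.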

Tags: arxiv, classification, context, labels, text, text classification, training, virtual
