Web: http://arxiv.org/abs/2202.11915

June 23, 2022, 1:13 a.m. | Xihong Yang, Xiaochang Hu, Sihang Zhou, Xinwang Liu, En Zhu

cs.CV updates on arXiv.org

Semi-supervised learning (SSL) has long been proven to be an effective
technique for constructing powerful models with limited labels. In the existing
literature, consistency regularization-based methods, which force perturbed
samples to have predictions similar to those of the original ones, have attracted much
attention for their promising accuracy. However, we observe that the
performance of such methods decreases drastically when labels become extremely
limited, e.g., 2 or 3 labels per category. Our empirical study finds that
the main problem …
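To make the idea concrete, here is a minimal sketch of a consistency-regularization penalty: the mean squared error between the predicted class distributions of a sample and its perturbed view. This is a generic illustration of the technique the abstract refers to, not the specific method proposed in the paper; the function names are hypothetical.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(logits_orig, logits_perturbed):
    # Mean squared error between the predicted class distributions
    # of the original sample and its perturbed view.
    p = softmax(logits_orig)
    q = softmax(logits_perturbed)
    return float(np.mean((p - q) ** 2))

logits = np.array([[2.0, 0.5, -1.0]])
# Identical predictions incur zero penalty; diverging ones are penalized.
print(consistency_loss(logits, logits))
print(consistency_loss(logits, logits + np.array([[0.0, 3.0, 0.0]])) > 0.0)
```

In an SSL training loop this term is typically added, with some weight, to the supervised loss computed on the few labeled samples; the perturbation can be data augmentation, dropout, or input noise.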

