Feb. 16, 2022, 2:11 a.m. | Baixu Chen, Junguang Jiang, Ximei Wang, Jianmin Wang, Mingsheng Long

cs.LG updates on arXiv.org arxiv.org

Deep neural networks achieve remarkable performance on a wide range of tasks
with the aid of large-scale labeled datasets. However, large-scale annotations
are time-consuming and labor-intensive to obtain for realistic tasks. To
mitigate the requirement for labeled data, self-training is widely used in both
academia and industry by pseudo labeling readily available unlabeled data.
Despite its popularity, pseudo labeling is widely believed to be unreliable and
often leads to training instability. Our experimental studies further reveal
that the performance of self-training …
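The self-training loop the abstract refers to can be illustrated with a generic sketch (this is a common textbook formulation, not the paper's specific method): train on the labeled set, pseudo-label the unlabeled points the model is confident about, fold them into the training set, and retrain. The `threshold` and `rounds` parameters, and the use of scikit-learn's `LogisticRegression`, are illustrative choices, not from the source.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, threshold=0.9, rounds=3):
    """Generic self-training via confidence-thresholded pseudo labeling.

    A minimal sketch: the base learner, threshold, and round count are
    illustrative assumptions, not the paper's algorithm.
    """
    model = LogisticRegression().fit(X_lab, y_lab)
    X_cur, y_cur, pool = X_lab, y_lab, X_unlab
    for _ in range(rounds):
        if len(pool) == 0:
            break
        proba = model.predict_proba(pool)
        conf = proba.max(axis=1)
        keep = conf >= threshold  # only trust confident pseudo labels
        if not keep.any():
            break
        pseudo_y = proba.argmax(axis=1)[keep]
        # Grow the training set with pseudo-labeled points and retrain.
        X_cur = np.vstack([X_cur, pool[keep]])
        y_cur = np.concatenate([y_cur, pseudo_y])
        pool = pool[~keep]
        model = LogisticRegression().fit(X_cur, y_cur)
    return model
```

The confidence threshold is the usual knob here: too low and noisy pseudo labels accumulate (the unreliability the abstract warns about); too high and no unlabeled data is ever used.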
