Debiased Pseudo Labeling in Self-Training. (arXiv:2202.07136v1 [cs.LG])
Feb. 16, 2022, 2:11 a.m. | Baixu Chen, Junguang Jiang, Ximei Wang, Jianmin Wang, Mingsheng Long
cs.LG updates on arXiv.org
Deep neural networks achieve remarkable performance on a wide range of tasks
with the aid of large-scale labeled datasets. However, large-scale annotations
are time-consuming and labor-intensive to obtain for realistic tasks. To
reduce the need for labeled data, self-training is widely used in both
academia and industry by pseudo labeling readily available unlabeled data.
Despite its popularity, pseudo labeling is widely believed to be unreliable and
often leads to training instability. Our experimental studies further reveal
that the performance of self-training …
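The self-training loop the abstract describes can be sketched in a few lines: train on the labeled set, pseudo-label unlabeled points whose prediction is confident, fold them into the training set, and repeat. The nearest-centroid classifier, margin-based confidence, and threshold below are illustrative assumptions for a minimal 1-D example, not the debiasing method proposed in the paper.

```python
# Minimal sketch of self-training with confidence-thresholded pseudo labels.
# Classifier, confidence measure, and threshold are illustrative assumptions.

def centroids(points, labels):
    """Mean of the 1-D points belonging to each class label."""
    sums, counts = {}, {}
    for x, y in zip(points, labels):
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(cents, x):
    """Return (label, confidence); confidence is the relative margin
    between the two nearest class centroids, in [0, 1]."""
    dists = sorted((abs(x - c), y) for y, c in cents.items())
    (d1, label), (d2, _) = dists[0], dists[1]
    conf = (d2 - d1) / (d2 + d1 + 1e-12)
    return label, conf

def self_train(labeled_x, labeled_y, unlabeled_x, threshold=0.5, rounds=3):
    """Iteratively pseudo-label confident unlabeled points and retrain."""
    xs, ys = list(labeled_x), list(labeled_y)
    pool = list(unlabeled_x)
    for _ in range(rounds):
        cents = centroids(xs, ys)
        deferred = []
        for x in pool:
            label, conf = predict(cents, x)
            if conf >= threshold:      # accept only confident pseudo labels
                xs.append(x)
                ys.append(label)
            else:                      # defer ambiguous points to later rounds
                deferred.append(x)
        pool = deferred
    return centroids(xs, ys)

# Two labeled seeds, five unlabeled points; 5.1 stays unlabeled (low margin).
cents = self_train([0.0, 10.0], ["a", "b"], [1.0, 2.0, 8.5, 9.0, 5.1])
print(cents)
```

Note the failure mode the abstract alludes to: once a wrong pseudo label crosses the confidence threshold, it is trained on as if it were ground truth, so early mistakes can compound across rounds.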