Nov. 10, 2022, 2:14 a.m. | Baixu Chen, Junguang Jiang, Ximei Wang, Pengfei Wan, Jianmin Wang, Mingsheng Long

cs.CV updates on arXiv.org

Deep neural networks achieve remarkable performance on a wide range of tasks with the aid of large-scale labeled datasets. Yet such datasets are time-consuming and labor-intensive to obtain for realistic tasks. To mitigate the requirement for labeled data, self-training is widely used in semi-supervised learning: it iteratively assigns pseudo labels to unlabeled samples. Despite its popularity, self-training is widely believed to be unreliable and often leads to training instability. Our experimental studies
further reveal that the bias in semi-supervised learning arises …
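For readers unfamiliar with the technique the abstract refers to, here is a minimal sketch of the standard self-training loop: a supervised loss on labeled data plus a pseudo-label loss on confidently predicted unlabeled data. This illustrates generic confidence-thresholded pseudo-labeling, not the paper's proposed method; all names (`model`, `labeled_batch`, `unlabeled_batch`, `threshold`) are illustrative assumptions.

```python
# Sketch of one self-training (pseudo-labeling) step -- a generic baseline,
# NOT the method proposed in the paper. Names are illustrative.
import torch
import torch.nn.functional as F

def self_training_step(model, optimizer, labeled_batch, unlabeled_batch,
                       threshold=0.95, unlabeled_weight=1.0):
    """Combine a supervised loss on labeled data with a pseudo-label
    loss on confidently predicted unlabeled data."""
    x_l, y_l = labeled_batch   # labeled inputs and ground-truth labels
    x_u = unlabeled_batch      # unlabeled inputs

    # Supervised cross-entropy on the labeled batch.
    sup_loss = F.cross_entropy(model(x_l), y_l)

    # Assign pseudo labels: argmax predictions, kept only where the
    # model's confidence exceeds the threshold (no gradient through them).
    with torch.no_grad():
        probs = F.softmax(model(x_u), dim=1)
        conf, pseudo_labels = probs.max(dim=1)
        mask = (conf >= threshold).float()

    # Unsupervised cross-entropy against the pseudo labels,
    # masked to the confident subset.
    logits_u = model(x_u)
    unsup_loss = (F.cross_entropy(logits_u, pseudo_labels,
                                  reduction="none") * mask).mean()

    loss = sup_loss + unlabeled_weight * unsup_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the pseudo labels come from the model's own predictions, early mistakes can be reinforced across iterations, which is the source of the unreliability, bias, and training instability the abstract highlights.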

Tags: arxiv, self-training, semi-supervised learning, supervised learning, training
