April 22, 2024, 4:41 a.m. | Jifeng Guo, Zhulin Liu, Tong Zhang, C. L. Philip Chen

cs.LG updates on arXiv.org

arXiv:2404.12398v1 Announce Type: new
Abstract: Semi-supervised learning provides a solution to reduce the dependency of machine learning on labeled data. As one of the efficient semi-supervised techniques, self-training (ST) has received increasing attention. Several advancements have emerged to address challenges associated with noisy pseudo-labels. Previous works on self-training acknowledge the importance of unlabeled data but have not delved into their efficient utilization, nor have they paid attention to the problem of high time consumption caused by iterative learning. This paper …

