Web: http://arxiv.org/abs/2209.06993

Sept. 20, 2022, 1:13 a.m. | Ye Du, Yujun Shen, Haochen Wang, Jingjing Fei, Wei Li, Liwei Wu, Rui Zhao, Zehua Fu, Qingjie Liu

cs.CV updates on arXiv.org arxiv.org

Self-training has shown great potential in semi-supervised learning. Its core
idea is to use the model learned on labeled data to generate pseudo-labels for
unlabeled samples, and in turn teach itself. To obtain valid supervision,
recent attempts typically employ a momentum teacher for pseudo-label prediction,
yet suffer from the confirmation-bias issue, where incorrect predictions
provide wrong supervision signals that accumulate over the course of training.
The primary cause of such a drawback is that the prevailing self-training
framework …
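The momentum-teacher loop the abstract describes can be sketched in a few lines: the teacher tracks an exponential moving average (EMA) of the student's weights and produces pseudo-labels for unlabeled data. This is a minimal illustrative sketch, not the paper's exact method; the EMA momentum value, the confidence threshold, and the `pseudo_labels` helper are assumptions introduced here for clarity.

```python
import numpy as np

def ema_update(teacher_w, student_w, momentum=0.99):
    """Momentum-teacher update: the teacher's weights are an exponential
    moving average of the student's weights (assumed momentum value)."""
    return momentum * teacher_w + (1.0 - momentum) * student_w

def pseudo_labels(teacher_logits, threshold=0.9):
    """Turn teacher logits into pseudo-labels, keeping only confident
    predictions; low-confidence samples are marked -1 (ignored in the
    loss). Thresholding is one common guard against confirmation bias;
    the threshold value here is an illustrative assumption."""
    # numerically stable softmax over the class axis
    e = np.exp(teacher_logits - teacher_logits.max(axis=-1, keepdims=True))
    probs = e / e.sum(axis=-1, keepdims=True)
    conf = probs.max(axis=-1)
    labels = probs.argmax(axis=-1)
    labels[conf < threshold] = -1  # drop uncertain pseudo-labels
    return labels

# toy usage: 2 unlabeled samples, 3 classes
logits = np.array([[4.0, 0.1, 0.1],   # confident -> pseudo-label 0
                   [1.0, 1.1, 0.9]])  # uncertain -> ignored (-1)
print(pseudo_labels(logits))  # -> [ 0 -1]
```

In the full loop, the student is trained on labeled data plus the confident pseudo-labels, and `ema_update` is applied to every teacher parameter after each student step.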

Tags: arxiv, framework, future, segmentation, self-training, semantic, training
