March 29, 2024, 4:43 a.m. | Chongjie Si, Xuehui Wang, Yan Wang, Xiaokang Yang, Wei Shen

cs.LG updates on arXiv.org arxiv.org

arXiv:2312.11034v3 Announce Type: replace
Abstract: In partial label learning (PLL), each instance is associated with a set of candidate labels, among which only one is the ground truth. The majority of existing works focus on constructing robust classifiers that estimate the labeling confidence of candidate labels in order to identify the correct one. However, these methods usually struggle to identify and rectify mislabeled samples. To help these mislabeled samples "appeal" for themselves and help existing PLL methods identify and rectify mislabeled …
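As a rough illustration of the "labeling confidence" idea the abstract refers to, the sketch below renormalizes classifier scores over each instance's candidate label set. This is a generic PLL baseline, not the appeal mechanism proposed in the paper; the names `logits` and `candidate_mask` are illustrative assumptions.

```python
import numpy as np

def candidate_label_confidence(logits: np.ndarray, candidate_mask: np.ndarray) -> np.ndarray:
    """Estimate labeling confidence restricted to candidate labels.

    logits: (n_samples, n_classes) raw classifier scores.
    candidate_mask: (n_samples, n_classes) binary mask, 1 where a label is a candidate.
    Returns a (n_samples, n_classes) array whose rows sum to 1 over the candidates.
    """
    # Softmax computed only over each instance's candidate labels;
    # probability mass on non-candidate labels is discarded.
    scores = np.exp(logits - logits.max(axis=1, keepdims=True)) * candidate_mask
    return scores / scores.sum(axis=1, keepdims=True)

# Toy usage: 2 instances, 4 classes; instance 0's candidates are {0, 2}, instance 1's are {1, 3}.
logits = np.array([[2.0, 0.1, 1.5, -1.0],
                   [0.3, 1.2, 0.4, 0.9]])
mask = np.array([[1, 0, 1, 0],
                 [0, 1, 0, 1]], dtype=float)
print(candidate_label_confidence(logits, mask))
```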

