Verifying the Selected Completely at Random Assumption in Positive-Unlabeled Learning
April 2, 2024, 7:43 p.m. | Paweł Teisseyre, Konrad Furmańczyk, Jan Mielniczuk
cs.LG updates on arXiv.org
Abstract: The goal of positive-unlabeled (PU) learning is to train a binary classifier from training data containing only positive and unlabeled instances, where each unlabeled observation may belong either to the positive or to the negative class. Modeling PU data requires assumptions on the labeling mechanism that determines which positive observations are assigned a label. The simplest assumption, considered in early works, is SCAR (Selected Completely At Random), according to which the …
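To make the SCAR setting concrete, here is a minimal simulation sketch (not from the paper; the feature distribution, label frequency `c`, and all variable names are illustrative assumptions). Under SCAR, each positive example is labeled with a constant probability `c`, independently of its features, and negatives are never labeled, so the observed data consist only of labeled positives and a mixed unlabeled set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy PU data (hypothetical): true class y and a 1-D feature x shifted by class.
n = 10_000
y = rng.binomial(1, 0.5, size=n)           # true class: 1 = positive, 0 = negative
x = rng.normal(loc=y, scale=1.0, size=n)   # feature; never used by the labeler under SCAR

# SCAR labeling mechanism: P(s=1 | y=1, x) = c for every x; negatives get s=0.
c = 0.3  # label frequency (assumed value for illustration)
s = np.where(y == 1, rng.binomial(1, c, size=n), 0)  # s=1 means "labeled positive"

# In PU learning only (x, s) is observed; the unlabeled pool mixes both classes.
labeled_positives = x[s == 1]
unlabeled = x[s == 0]

# Empirical check of the SCAR property: the labeled fraction among true
# positives should be close to c regardless of x.
frac_labeled_among_pos = s[y == 1].mean()
print(round(frac_labeled_among_pos, 2))
```

Verifying SCAR, as the paper's title suggests, amounts to testing whether this labeling probability really is constant in `x`; if labeling instead depends on features, the more general SAR (Selected At Random) setting applies.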