Sept. 1, 2022, 1:11 a.m. | Emilio Dorigatti, Jann Goschenhofer, Benjamin Schubert, Mina Rezaei, Bernd Bischl

stat.ML updates on arXiv.org arxiv.org

Positive-unlabeled (PU) learning aims to learn a binary classifier from
only positive and unlabeled training data. Recent approaches have addressed this
problem via cost-sensitive learning by developing unbiased loss functions, and
their performance was later improved by iterative pseudo-labeling solutions.
However, such two-step procedures are vulnerable to incorrectly estimated
pseudo-labels, as errors propagate through later iterations when a new model is
trained on erroneous predictions. To prevent such confirmation bias, we propose
PUUPL, a novel loss-agnostic training procedure for PU …
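For context on the cost-sensitive baseline the abstract mentions, here is a minimal sketch of a non-negative unbiased PU risk estimator in the style of nnPU (Kiryo et al., 2017). This illustrates the general idea only, not the paper's PUUPL procedure; the function and variable names are hypothetical, and it assumes the class prior `prior` is known.

```python
import numpy as np

def sigmoid_loss(scores, label):
    # Sigmoid surrogate loss l(z, y) = 1 / (1 + exp(y * z));
    # small when the score z agrees in sign with the label y.
    return 1.0 / (1.0 + np.exp(label * scores))

def nnpu_risk(scores_pos, scores_unl, prior):
    """Non-negative PU risk estimate from positive and unlabeled scores.

    scores_pos: classifier scores on labeled-positive examples
    scores_unl: classifier scores on unlabeled examples
    prior: assumed class prior pi = P(y = +1)
    """
    r_p_pos = sigmoid_loss(scores_pos, +1).mean()   # positives as positive
    r_p_neg = sigmoid_loss(scores_pos, -1).mean()   # positives as negative
    r_u_neg = sigmoid_loss(scores_unl, -1).mean()   # unlabeled as negative
    # Clipping at zero prevents the negative-risk overfitting that the
    # plain unbiased estimator suffers from.
    return prior * r_p_pos + max(0.0, r_u_neg - prior * r_p_neg)

# Example: scores for a few positive and unlabeled points
risk = nnpu_risk(np.array([2.0, 3.0, 1.5]),
                 np.array([-1.0, 0.5, -2.0]),
                 prior=0.4)
```

Pseudo-labeling methods such as the one improved upon here would then assign provisional labels to unlabeled points and retrain, which is where the confirmation bias described above can arise.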

