Sept. 1, 2022, 1:11 a.m. | Emilio Dorigatti, Jann Goschenhofer, Benjamin Schubert, Mina Rezaei, Bernd Bischl

cs.LG updates on arXiv.org

Positive-unlabeled (PU) learning aims at learning a binary classifier from
only positive and unlabeled training data. Recent approaches addressed this
problem via cost-sensitive learning by developing unbiased loss functions, and
their performance was later improved by iterative pseudo-labeling solutions.
However, such two-step procedures are vulnerable to incorrectly estimated
pseudo-labels, as errors are propagated in later iterations when a new model is
trained on erroneous predictions. To prevent such confirmation bias, we propose
PUUPL, a novel loss-agnostic training procedure for PU …
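The unbiased cost-sensitive losses the abstract refers to are typically variants of the PU risk estimator, which rewrites the (unobservable) negative-class risk in terms of the positive and unlabeled samples and a known class prior. As a hedged illustration (not the paper's own PUUPL procedure), here is a minimal sketch of the non-negative PU risk estimator, assuming the class prior `prior` is given and classifier scores are real-valued margins:

```python
import numpy as np

def hinge(z):
    """Hinge surrogate loss l(z) = max(0, 1 - z)."""
    return np.maximum(0.0, 1.0 - z)

def nn_pu_risk(scores_pos, scores_unl, prior, loss=hinge):
    """Non-negative PU risk estimate from positive and unlabeled scores.

    R_pu = prior * E_p[l(g(x))]
           + max(0, E_u[l(-g(x))] - prior * E_p[l(-g(x))])

    The max(0, .) clamp prevents the estimated negative-class risk
    from going negative, which otherwise encourages overfitting.
    """
    risk_pos = prior * loss(scores_pos).mean()
    risk_neg = loss(-scores_unl).mean() - prior * loss(-scores_pos).mean()
    return risk_pos + max(0.0, risk_neg)

# Confident positives scored high, unlabeled scored low: risk is zero.
risk = nn_pu_risk(np.array([2.0, 3.0]), np.array([-2.0, -3.0]), prior=0.5)
```

Pseudo-labeling approaches then threshold the classifier's predictions on the unlabeled set and retrain, which is where the confirmation bias discussed above can arise if early mistakes are reinforced.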
