Positive-Unlabeled Learning with Uncertainty-aware Pseudo-label Selection. (arXiv:2201.13192v2 [stat.ML] UPDATED)
Sept. 1, 2022, 1:11 a.m. | Emilio Dorigatti, Jann Goschenhofer, Benjamin Schubert, Mina Rezaei, Bernd Bischl
stat.ML updates on arXiv.org
Positive-unlabeled (PU) learning aims to learn a binary classifier from
only positive and unlabeled training data. Recent approaches addressed this
problem via cost-sensitive learning by developing unbiased loss functions, and
their performance was later improved by iterative pseudo-labeling solutions.
However, such two-step procedures are vulnerable to incorrectly estimated
pseudo-labels, since errors propagate through later iterations when a new model
is trained on erroneous predictions. To prevent such confirmation bias, we
propose PUUPL, a novel loss-agnostic training procedure for PU …
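The abstract's key idea, selecting pseudo-labels by how uncertain the model is about them, can be illustrated with a generic selection step. This is a minimal sketch, not the paper's actual PUUPL procedure: the entropy score, the ensemble-averaging, and the function name `select_pseudo_labels` are all illustrative assumptions.

```python
import numpy as np

def select_pseudo_labels(ensemble_probs, k):
    """Pick the k unlabeled examples whose ensemble predictions are
    least uncertain and assign them hard pseudo-labels.

    ensemble_probs: (n_models, n_unlabeled) array of P(y=1 | x)
    returns: (indices, labels) of the selected examples
    """
    mean_p = ensemble_probs.mean(axis=0)  # ensemble-averaged probability
    # Predictive entropy of the averaged Bernoulli as the uncertainty score
    # (one possible score; the paper studies several uncertainty measures)
    eps = 1e-12
    entropy = -(mean_p * np.log(mean_p + eps)
                + (1 - mean_p) * np.log(1 - mean_p + eps))
    idx = np.argsort(entropy)[:k]              # k most confident predictions
    labels = (mean_p[idx] >= 0.5).astype(int)  # hard pseudo-labels
    return idx, labels

# Toy ensemble: 3 models scoring 5 unlabeled points
probs = np.array([[0.95, 0.52, 0.10, 0.48, 0.90],
                  [0.92, 0.55, 0.05, 0.51, 0.88],
                  [0.97, 0.49, 0.08, 0.50, 0.93]])
idx, labels = select_pseudo_labels(probs, k=2)
# Points 0 and 2 are selected: the ensemble agrees confidently on them,
# while points 1 and 3 sit near 0.5 and are left unlabeled for now.
```

Only the selected examples would then be added to the training set for the next iteration; the high-entropy points are deferred, which is what limits the confirmation bias described above.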