April 19, 2024, 4:41 a.m. | Paul Hofman, Yusuf Sale, Eyke Hüllermeier

cs.LG updates on arXiv.org

arXiv:2404.12215v1 Announce Type: new
Abstract: Uncertainty representation and quantification are paramount in machine learning and constitute an important prerequisite for safety-critical applications. In this paper, we propose novel measures for the quantification of aleatoric and epistemic uncertainty based on proper scoring rules, which are loss functions with the meaningful property that they incentivize the learner to predict ground-truth (conditional) probabilities. We assume two common representations of (epistemic) uncertainty, namely, in terms of a credal set, i.e. a set of probability …

