March 26, 2024, 4:43 a.m. | Kudaibergen Abutalip, Numan Saeed, Ikboljon Sobirov, Vincent Andrearczyk, Adrien Depeursinge, Mohammad Yaqub

cs.LG updates on arXiv.org

arXiv:2403.16594v1 Announce Type: cross
Abstract: Deploying deep learning (DL) models in medical applications relies on predictive performance and other critical factors, such as conveying trustworthy predictive uncertainty. Uncertainty estimation (UE) methods provide potential solutions for evaluating prediction reliability and improving model confidence calibration. Despite increasing interest in UE, challenges persist, such as the need for explicit methods to capture aleatoric uncertainty and to align uncertainty estimates with real-life disagreements among domain experts. This paper proposes an Expert Disagreement-Guided Uncertainty Estimation …
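The idea of aligning a model's uncertainty with inter-expert disagreement can be illustrated with a minimal sketch. The following is not the paper's method; it is a generic, hypothetical example assuming a binary segmentation task, a per-pixel foreground probability map from the model, and several expert annotation masks. It compares per-pixel predictive entropy with the entropy of the experts' vote frequencies.

```python
# Hypothetical illustration (not the paper's proposed estimator): compare a
# model's per-pixel predictive entropy with disagreement among multiple expert
# segmentations. Shapes, names, and the random stand-in data are assumptions.
import numpy as np


def predictive_entropy(probs: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Per-pixel binary entropy of foreground probabilities, shape (H, W)."""
    p = np.clip(probs, eps, 1.0 - eps)
    return -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))


def expert_disagreement(masks: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Per-pixel entropy of the experts' vote frequency; masks shape (R, H, W)."""
    freq = np.clip(masks.mean(axis=0), eps, 1.0 - eps)
    return -(freq * np.log(freq) + (1.0 - freq) * np.log(1.0 - freq))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    probs = rng.uniform(size=(64, 64))             # stand-in model output
    masks = rng.integers(0, 2, size=(4, 64, 64))   # 4 simulated expert masks

    ue = predictive_entropy(probs).ravel()
    dis = expert_disagreement(masks).ravel()
    # Simple alignment check: correlation between uncertainty and disagreement.
    print("Pearson r:", np.corrcoef(ue, dis)[0, 1])
```

In practice, an expert-disagreement-guided method would go further than this correlation check, but the sketch shows the two quantities such approaches aim to bring into agreement.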

