On Calibrated Model Uncertainty in Deep Learning. (arXiv:2206.07795v1 [cs.LG])
Web: http://arxiv.org/abs/2206.07795
June 17, 2022, 1:10 a.m. | Biraja Ghoshal, Allan Tucker
cs.LG updates on arXiv.org
Uncertainty estimates from approximate posteriors in Bayesian neural networks
are prone to miscalibration, which leads to overconfident predictions in
critical tasks with clearly asymmetric costs or significant losses. Here,
we extend approximate inference in the loss-calibrated Bayesian framework
to dropweight-based Bayesian neural networks, maximising expected utility
over the model posterior to calibrate uncertainty in deep learning. Furthermore,
we show that decisions informed by loss-calibrated uncertainty can improve
diagnostic performance to a greater extent than straightforward …
Latest AI/ML/Big Data Jobs
Machine Learning Researcher - Saalfeld Lab
@ Howard Hughes Medical Institute - Chevy Chase, MD | Ashburn, Virginia
Project Director, Machine Learning in US Health
@ ideas42.org | Remote, US
Data Science Intern
@ NannyML | Remote
Machine Learning Engineer NLP/Speech
@ Play.ht | Remote
Research Scientist, 3D Reconstruction
@ Yembo | Remote, US
Clinical Assistant or Associate Professor of Management Science and Systems
@ University at Buffalo | Buffalo, NY