Web: http://arxiv.org/abs/2106.09385

May 5, 2022, 1:10 a.m. | Aditya Singh, Alessandro Bay, Biswa Sengupta, Andrea Mirabile

cs.CV updates on arXiv.org arxiv.org

Deep neural networks have been shown to be highly miscalibrated: they often tend to be overconfident in their predictions. This poses a significant challenge to the reliable use of deep neural networks (DNNs) in safety-critical systems. Many recently proposed approaches to mitigate this have demonstrated substantial progress in improving DNN calibration. However, they hardly touch upon refinement, which historically has been an essential aspect of calibration. Refinement indicates the separability of a network's correct and incorrect predictions. This paper presents a theoretically and …
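Refinement, as described above, can be quantified in several ways; one common choice (an assumption here — the paper may use a different measure) is the AUROC of the confidence scores for separating correct from incorrect predictions. A minimal NumPy sketch:

```python
import numpy as np

def refinement_auroc(confidences, correct):
    """AUROC of confidence scores for separating correct from incorrect
    predictions -- one common proxy for refinement (an illustrative
    assumption, not necessarily the paper's exact measure)."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    pos = confidences[correct]    # confidences of correct predictions
    neg = confidences[~correct]   # confidences of incorrect predictions
    # Probability that a randomly chosen correct prediction is more
    # confident than a randomly chosen incorrect one (ties count 0.5).
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

# Toy example: a well-refined model assigns higher confidence to
# its correct predictions, so the two groups separate cleanly.
conf = [0.9, 0.8, 0.95, 0.4, 0.3]
corr = [1, 1, 1, 0, 0]
print(refinement_auroc(conf, corr))  # 1.0 (perfect separation)
```

A score near 0.5 would mean the network's confidence carries no information about which predictions are wrong, even if its average confidence is well calibrated.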

