Uncertainty Quantification for Gradient-based Explanations in Neural Networks
March 27, 2024, 4:41 a.m. | Mihir Mulye, Matias Valdenegro-Toro
cs.LG updates on arXiv.org
Abstract: Explanation methods help us understand the reasons behind a model's prediction. They are increasingly used for model debugging, performance optimization, and gaining insight into the workings of a model. Given such critical applications, it is imperative to measure the uncertainty associated with the explanations these methods generate. In this paper, we propose a pipeline to ascertain the explanation uncertainty of neural networks by combining uncertainty estimation methods with explanation methods. We …
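The abstract's core idea, combining an uncertainty estimation method with an explanation method to put error bars on explanations, can be illustrated with a minimal sketch. This is not the authors' pipeline: it assumes a toy linear model (whose input gradient, the saliency map, is simply the weight vector) and a hypothetical ensemble of weight samples standing in for a real uncertainty method such as deep ensembles or MC dropout.

```python
import numpy as np

rng = np.random.default_rng(0)

def saliency(w, x):
    # Gradient-based explanation: for f(x) = w @ x, the gradient of the
    # output with respect to the input x is the weight vector w itself.
    return w

# Hypothetical posterior over weights, e.g. obtained from deep ensembles
# or MC dropout in a real pipeline (illustrative values only).
w_mean = np.array([1.0, -2.0, 0.5])
weight_samples = w_mean + 0.1 * rng.standard_normal((50, 3))

x = np.array([0.3, 0.7, -0.2])

# One explanation per ensemble member, stacked into a (50, 3) array.
explanations = np.stack([saliency(w, x) for w in weight_samples])

# Aggregate: the mean explanation and its per-feature uncertainty.
expl_mean = explanations.mean(axis=0)
expl_std = explanations.std(axis=0)
```

Features whose attribution varies little across the ensemble (small `expl_std`) can be trusted more than features whose sign or magnitude flips between samples, which is the kind of information a practitioner debugging a model would want alongside the raw saliency map.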