April 23, 2024, 4:42 a.m. | Paulo Yanez Sarmiento, Simon Witzke, Nadja Klein, Bernhard Y. Renard

cs.LG updates on arXiv.org

arXiv:2404.14271v1 Announce Type: new
Abstract: Explainability is a key component in many applications involving deep neural networks (DNNs). However, current explanation methods for DNNs commonly leave it to the human observer to distinguish relevant explanations from spurious noise. This is not feasible anymore when going from easily human-accessible data such as images to more complex data such as genome sequences. To facilitate the accessibility of DNN outputs from such complex data and to increase explainability, we present a modification of …
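The abstract's point about separating relevant explanations from spurious noise can be illustrated with a generic attribution method. The sketch below is not the paper's method; it uses plain gradient-times-input saliency on a toy linear model with made-up weights, just to show the kind of per-feature relevance scores an observer would have to interpret.

```python
import numpy as np

# Toy illustration (assumed setup, not the paper's method): gradient * input
# saliency for a one-layer linear "network". For a linear model y = x @ W,
# the gradient of y w.r.t. x is simply W, so relevance per input feature is
# grad * x, and these relevances sum exactly to the output y.

rng = np.random.default_rng(0)

W = rng.normal(size=(8, 1))    # toy weights
x = rng.normal(size=(8,))      # one toy input (e.g. an encoded sequence window)

y = float(x @ W)               # scalar model output
grad = W[:, 0]                 # dy/dx for the linear model
relevance = grad * x           # signed per-feature relevance; sums to y

# Large |relevance| marks features the model actually used; the many small
# nonzero scores are the "spurious noise" a human observer must filter out.
top = np.argsort(np.abs(relevance))[::-1][:3]
print("top features:", top)
```

For genome sequences with thousands of positions, this manual filtering step is exactly what becomes infeasible, which motivates methods that suppress the noise inside the attribution itself.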

