A Self-explaining Neural Architecture for Generalizable Concept Learning
May 2, 2024, 4:42 a.m. | Sanchit Sinha, Guangzhi Xiong, Aidong Zhang
cs.LG updates on arXiv.org arxiv.org
Abstract: With the wide proliferation of Deep Neural Networks in high-stakes applications, there is a growing demand for explainability in their decision-making process. Concept learning models attempt to learn high-level 'concepts' - abstract entities that align with human understanding - and thus provide interpretability for DNN architectures. However, in this paper, we demonstrate that present SOTA concept learning approaches suffer from two major problems - lack of concept fidelity, wherein the models fail to learn consistent concepts …
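The concept-learning setup the abstract describes is commonly realized as a concept bottleneck: the model first predicts scores for a small set of human-interpretable concepts, then makes its final prediction from those scores alone. A minimal sketch of that forward pass, with hypothetical weights and dimensions (the paper's actual architecture may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative, randomly initialized weights: 8 input features,
# 3 human-interpretable concepts, 2 output classes.
W_concept = rng.normal(size=(8, 3))  # maps features -> concept scores
W_label = rng.normal(size=(3, 2))    # maps concepts -> class logits

def forward(x):
    """Concept-bottleneck forward pass: the label prediction depends
    on the input only through the low-dimensional concept scores,
    which is what makes the model's reasoning inspectable."""
    concepts = sigmoid(x @ W_concept)  # each score in (0, 1), readable
                                       # as "how present is concept k?"
    logits = concepts @ W_label
    return concepts, logits

x = rng.normal(size=(8,))
concepts, logits = forward(x)
print(concepts.shape, logits.shape)  # (3,) (2,)
```

The "concept fidelity" problem the paper identifies would show up here as the learned concept scores failing to track the same human notion across similar inputs.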