Causality-Aware Local Interpretable Model-Agnostic Explanations
April 16, 2024, 4:45 a.m. | Martina Cinquini, Riccardo Guidotti
cs.LG updates on arXiv.org (arxiv.org)
Abstract: A main drawback of eXplainable Artificial Intelligence (XAI) approaches is the feature independence assumption, hindering the study of potential variable dependencies. This leads to approximating black box behaviors by analyzing the effects on randomly generated feature values that may rarely occur in the original samples. This paper addresses this issue by integrating causal knowledge in an XAI method to enhance transparency and enable users to assess the quality of the generated explanations. Specifically, we propose …
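The feature independence issue the abstract describes can be illustrated with a minimal sketch: a LIME-style method perturbs each feature independently, which can produce combinations that never occur in real data, whereas a causality-aware variant regenerates dependent features from their causes. The structural equation used here (feature 1 = 2 × feature 0 + small noise) is a made-up assumption for illustration, not the paper's method.

```python
import random

def independent_perturbations(x, scales, n=5, seed=0):
    """LIME-style neighborhood: every feature is perturbed independently,
    so the synthetic samples can violate real-world dependencies."""
    rnd = random.Random(seed)
    return [[xi + rnd.gauss(0.0, s) for xi, s in zip(x, scales)]
            for _ in range(n)]

def causal_perturbations(x, scales, n=5, seed=0):
    """Causality-aware neighborhood (illustrative): feature 1 is assumed
    to depend on feature 0 via the made-up structural equation
    x1 = 2*x0 + noise, so it is recomputed from the perturbed feature 0
    instead of being perturbed independently."""
    rnd = random.Random(seed)
    samples = []
    for _ in range(n):
        s = [xi + rnd.gauss(0.0, sc) for xi, sc in zip(x, scales)]
        s[1] = 2.0 * s[0] + rnd.gauss(0.0, 0.1)  # enforce assumed dependency
        samples.append(s)
    return samples
```

In the causal version, the synthetic neighborhood stays consistent with the assumed dependency, so the surrogate model fitted on it is probed only on plausible inputs.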