April 16, 2024, 4:45 a.m. | Martina Cinquini, Riccardo Guidotti

cs.LG updates on arXiv.org

arXiv:2212.05256v3 Announce Type: replace-cross
Abstract: A main drawback of eXplainable Artificial Intelligence (XAI) approaches is the feature independence assumption, hindering the study of potential variable dependencies. This leads to approximating black box behaviors by analyzing the effects on randomly generated feature values that may rarely occur in the original samples. This paper addresses this issue by integrating causal knowledge in an XAI method to enhance transparency and enable users to assess the quality of the generated explanations. Specifically, we propose …
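The feature-independence problem the abstract describes can be illustrated with a minimal sketch (not the authors' method). Assume a toy dataset where feature `x2` depends causally on `x1` (here, `x2 ≈ 2·x1`, a relation invented for illustration). Perturbing each feature independently, as many model-agnostic XAI methods do, produces synthetic samples that violate this dependency, while a perturbation that respects the (assumed known) causal relation stays close to the data distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data with a causal dependency: x2 is (approximately) 2 * x1.
x1 = rng.normal(size=1000)
x2 = 2 * x1 + rng.normal(scale=0.1, size=1000)
X = np.column_stack([x1, x2])

def perturb_independent(x, rng, scale=1.0):
    """Standard XAI-style perturbation: each feature varied on its own."""
    return x + rng.normal(scale=scale, size=x.shape)

def perturb_causal(x, rng, scale=1.0):
    """Dependency-aware perturbation: x1 is varied, then x2 is
    regenerated from the (assumed known) relation x2 = 2 * x1."""
    new_x1 = x[0] + rng.normal(scale=scale)
    new_x2 = 2 * new_x1 + rng.normal(scale=0.1)
    return np.array([new_x1, new_x2])

def violation(sample):
    """How far a synthetic sample strays from the dependency x2 = 2 * x1."""
    return abs(sample[1] - 2 * sample[0])

point = X[0]
indep = np.array([perturb_independent(point, rng) for _ in range(500)])
causal = np.array([perturb_causal(point, rng) for _ in range(500)])

print("mean dependency violation, independent:",
      np.mean([violation(s) for s in indep]))
print("mean dependency violation, causal:    ",
      np.mean([violation(s) for s in causal]))
```

Independently perturbed samples show a much larger average violation of the dependency, i.e. they land in regions the black box was never trained on, which is exactly the approximation error that causally informed perturbation aims to avoid.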

