Introducing User Feedback-based Counterfactual Explanations (UFCE)
March 4, 2024, 5:41 a.m. | Muhammad Suffian, Jose M. Alonso-Moral, Alessandro Bogliolo
cs.LG updates on arXiv.org arxiv.org
Abstract: Machine learning models are widely used in real-world applications. However, their complexity often makes it challenging to interpret the rationale behind their decisions. Counterfactual explanations (CEs) have emerged as a viable solution for generating comprehensible explanations in eXplainable Artificial Intelligence (XAI). CEs provide actionable information to users on how to achieve the desired outcome with minimal modifications to the input. However, current CE algorithms usually operate within the entire feature space when optimizing changes to …
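To make the idea concrete, here is a minimal sketch of what a counterfactual explanation computes, assuming a toy linear classifier as the black-box model. This is purely illustrative and does not reproduce the UFCE method from the paper: for a linear decision boundary, the smallest L2-norm input change that flips the prediction has a closed form (projection onto the separating hyperplane).

```python
import numpy as np

# Toy linear classifier: predict 1 if w.x + b > 0. This stands in for
# the black-box model; the UFCE algorithm itself is not reproduced here.
w = np.array([2.0, -1.0])
b = -1.0

def predict(x):
    return int(w @ x + b > 0)

def counterfactual(x):
    """Minimal L2-norm modification to x that crosses the decision
    boundary w.x + b = 0, nudged slightly past it (factor 1.01) so the
    predicted class actually flips."""
    score = w @ x + b
    delta = -(score / (w @ w)) * w * 1.01
    return x + delta

x = np.array([0.0, 0.5])    # classified as 0
x_cf = counterfactual(x)    # minimally modified input, classified as 1
```

The gap `x_cf - x` is the "actionable information" the abstract refers to: the smallest change the user could make to obtain the desired outcome. Real CE methods add constraints (plausibility, sparsity, user feedback) on top of this basic objective.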