April 23, 2024, 4:42 a.m. | Junqi Jiang, Francesco Leofante, Antonio Rago, Francesca Toni

cs.LG updates on arXiv.org

arXiv:2404.13736v1 Announce Type: new
Abstract: Counterfactual Explanations (CEs) have emerged as a major paradigm in explainable AI research, providing recourse recommendations for users affected by the decisions of machine learning models. However, when slight changes occur in the parameters of the underlying model, CEs found by existing methods often become invalid for the updated models. The literature lacks a way to certify deterministic robustness guarantees for CEs under model changes, in that existing methods to improve CEs' robustness are heuristic, …
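The fragility described in the abstract can be illustrated with a minimal sketch. Assuming a toy linear classifier (weights `w`, bias `b` are made up for illustration, not taken from the paper), a counterfactual found by projecting the input just past the decision boundary is valid for the original model but can be invalidated by a slight parameter shift, e.g. after retraining:

```python
import numpy as np

# Hypothetical linear classifier: predict class 1 iff w.x + b > 0.
w = np.array([1.0, -2.0])
b = -0.5

def predict(x, w, b):
    return int(np.dot(w, x) + b > 0)

# An instance currently rejected by the model (class 0).
x = np.array([0.2, 0.4])
assert predict(x, w, b) == 0

# Minimal counterfactual: project x onto the decision boundary
# along w, then step a small margin past it.
margin = 0.01
delta = -(np.dot(w, x) + b) / np.dot(w, w)
ce = x + (delta + margin / np.linalg.norm(w)) * w
assert predict(ce, w, b) == 1  # CE is valid for the original model

# Slightly perturbed parameters, standing in for a model update.
w2 = w + np.array([0.0, 0.05])
b2 = b - 0.05

print("valid under original model:", predict(ce, w, b) == 1)   # True
print("valid under updated model:", predict(ce, w2, b2) == 1)  # False
```

Because the CE sits only marginally past the boundary, even a small parameter change can move the boundary beyond it, which is exactly the failure mode that motivates certifying robustness guarantees rather than relying on heuristics.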

