April 23, 2024, 4:42 a.m. | Junqi Jiang, Francesco Leofante, Antonio Rago, Francesca Toni

cs.LG updates on arXiv.org

arXiv:2404.13736v1 Announce Type: new
Abstract: Counterfactual Explanations (CEs) have emerged as a major paradigm in explainable AI research, providing recourse recommendations for users affected by the decisions of machine learning models. However, when slight changes occur in the parameters of the underlying model, CEs found by existing methods often become invalid for the updated models. The literature lacks a way to certify deterministic robustness guarantees for CEs under model changes, in that existing methods to improve CEs' robustness are heuristic, …
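To illustrate the fragility the abstract describes, here is a minimal sketch (not the paper's method; the toy classifier and perturbation are invented for illustration) of a counterfactual explanation for a linear model that sits just across the decision boundary and is invalidated by a slight parameter update:

```python
import numpy as np

# Toy linear classifier: predict 1 if w.x + b > 0 (hypothetical model).
w = np.array([1.0, -1.0])
b = 0.0

def predict(x, w, b):
    return int(w @ x + b > 0)

# Factual input currently classified as 0.
x = np.array([-0.5, 0.5])

# Naive CE: move x just across the boundary along w, the minimal
# L2 change for a linear model.
margin = -(w @ x + b)                    # score deficit to flip the label
x_ce = x + (margin + 1e-3) * w / (w @ w)
print(predict(x_ce, w, b))               # 1: valid for the original model

# A slight model update (e.g. retraining on new data) shifts the boundary.
w2 = w + np.array([0.0, -0.1])
b2 = b - 0.05
print(predict(x_ce, w2, b2))             # 0: the CE is no longer valid
```

Because the naive CE lies arbitrarily close to the old boundary, even a small parameter change can flip its label back, which is exactly the robustness failure that motivates certified guarantees.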

