Provably Robust and Plausible Counterfactual Explanations for Neural Networks via Robust Optimisation
April 5, 2024, 4:42 a.m. | Junqi Jiang, Jianglin Lan, Francesco Leofante, Antonio Rago, Francesca Toni
Source: cs.LG updates on arXiv.org
Abstract: Counterfactual Explanations (CEs) have received increasing interest as a major methodology for explaining neural network classifiers. Usually, CEs for an input-output pair are defined as data points with minimum distance to the input that are classified with a different label than the output. To tackle the established problem that CEs are easily invalidated when model parameters are updated (e.g. retrained), studies have proposed ways to certify the robustness of CEs under model parameter changes bounded …
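The abstract defines a CE as a data point of minimum distance to the input that the classifier assigns a different label. A minimal sketch of that (non-robust) formulation, using a hypothetical toy linear classifier and a standard gradient search that trades off distance against the target-label loss (the paper's robust-optimisation method is not reproduced here):

```python
import numpy as np

# Hypothetical toy "neural network": a single logistic unit
# f(x) = sigmoid(w.x + b); predicted label = 1 if f(x) > 0.5 else 0.
w = np.array([2.0, -1.0])
b = 0.5

def predict_proba(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def predict(x):
    return int(predict_proba(x) > 0.5)

def counterfactual(x0, target, lam=0.1, lr=0.05, steps=2000):
    """Gradient search for a nearby point classified as `target`.

    Minimises  lam * ||x - x0||^2 + cross_entropy(f(x), target),
    the common distance-plus-validity objective for CEs.
    """
    x = x0.copy()
    for _ in range(steps):
        p = predict_proba(x)
        # d/dx cross_entropy(sigmoid(w.x + b), target) = (p - target) * w
        grad = 2 * lam * (x - x0) + (p - target) * w
        x -= lr * grad
    return x

x0 = np.array([0.0, 2.0])        # classified as 0 by the toy model
ce = counterfactual(x0, target=1)  # nearby point classified as 1
```

A CE found this way can be invalidated by retraining, since a small shift in `w` and `b` may flip its label back; certifying validity under bounded parameter changes is the problem the paper addresses.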