Analyzing Explainer Robustness via Probabilistic Lipschitzness of Prediction Functions
April 17, 2024, 4:43 a.m. | Zulqarnain Khan, Davin Hill, Aria Masoomi, Joshua Bone, Jennifer Dy
cs.LG updates on arXiv.org
Abstract: Machine learning methods have improved significantly in their predictive capabilities, but at the same time they are becoming more complex and less transparent. As a result, explainers are often relied on to provide interpretability for these black-box prediction models. Because they serve as crucial diagnostic tools, it is important that these explainers themselves are robust. In this paper we focus on one particular aspect of robustness, namely that an explainer should give similar explanations for similar data inputs. …
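The robustness notion in the abstract, that similar inputs should yield similar explanations, can be probed empirically. The following is a minimal illustrative sketch, not the paper's method: it uses a toy logistic model with an input-gradient "explanation" (both stand-ins chosen here for simplicity), samples nearby input pairs, and estimates a probabilistic Lipschitz constant as a high quantile of the ratio between explanation change and input change.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy black-box model: logistic regression with random weights.
# Hypothetical stand-in for an arbitrary prediction function.
W = rng.normal(size=5)

def predict(x):
    return 1.0 / (1.0 + np.exp(-x @ W))

def explain(x):
    # Input gradient of the logistic output, used as a simple explainer.
    p = predict(x)
    return p * (1.0 - p) * W

def explanation_lipschitz_ratios(n_pairs=1000, eps=0.1):
    """For sampled input pairs within perturbation eps, record
    ||explanation difference|| / ||input difference||."""
    ratios = []
    for _ in range(n_pairs):
        x = rng.normal(size=5)
        x2 = x + rng.uniform(-eps, eps, size=5)
        num = np.linalg.norm(explain(x) - explain(x2))
        den = np.linalg.norm(x - x2)
        ratios.append(num / den)
    return np.array(ratios)

ratios = explanation_lipschitz_ratios()
# Probabilistic Lipschitzness (informally): with probability at least
# 1 - delta over inputs, the ratio stays below a constant L.
# Estimate L as a high quantile of the observed ratios (delta = 0.05).
L_hat = np.quantile(ratios, 0.95)
print(f"estimated 95th-percentile Lipschitz ratio: {L_hat:.3f}")
```

A small estimated quantile suggests the explainer is locally stable for most inputs; heavy upper tails in the ratio distribution would flag regions where explanations change sharply under small input perturbations.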