April 17, 2024, 4:43 a.m. | Zulqarnain Khan, Davin Hill, Aria Masoomi, Joshua Bone, Jennifer Dy

cs.LG updates on arXiv.org

arXiv:2206.12481v3 Announce Type: replace
Abstract: Machine learning methods have significantly improved in their predictive capabilities, but at the same time they are becoming more complex and less transparent. As a result, explainers are often relied on to provide interpretability to these black-box prediction models. As crucial diagnostic tools, it is important that these explainers themselves are robust. In this paper we focus on one particular aspect of robustness, namely that an explainer should give similar explanations for similar data inputs. …
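The robustness notion stated in the abstract, that nearby inputs should receive nearby explanations, can be illustrated with a minimal sketch. This is not the paper's method; it uses a hypothetical finite-difference saliency explainer on a simple logistic model and estimates a local Lipschitz-style instability score by perturbing the input and comparing explanations:

```python
import numpy as np

def predict(x, w):
    """Black-box model stand-in: logistic regression score."""
    return 1.0 / (1.0 + np.exp(-x @ w))

def saliency(x, w, eps=1e-5):
    """Finite-difference gradient explanation for a single input."""
    grad = np.zeros_like(x)
    for i in range(len(x)):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        grad[i] = (predict(xp, w) - predict(xm, w)) / (2 * eps)
    return grad

def local_instability(x, w, radius=0.1, n_samples=100, seed=0):
    """Estimate max ||e(x') - e(x)|| / ||x' - x|| over random
    perturbations x' at distance `radius` from x. Large values mean
    the explainer's output changes sharply for similar inputs,
    i.e. the explainer is locally non-robust."""
    rng = np.random.default_rng(seed)
    base = saliency(x, w)
    worst = 0.0
    for _ in range(n_samples):
        delta = rng.normal(size=x.shape)
        delta = radius * delta / np.linalg.norm(delta)
        perturbed = saliency(x + delta, w)
        ratio = np.linalg.norm(perturbed - base) / np.linalg.norm(delta)
        worst = max(worst, ratio)
    return worst

x = np.array([0.2, -0.1, 0.3])
w = np.array([1.0, -2.0, 0.5])
score = local_instability(x, w)
```

For a smooth model like this one the score stays bounded; an explainer that fails the robustness criterion would produce large instability scores even for small perturbation radii.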

