Faithful and Robust Local Interpretability for Textual Predictions
April 9, 2024, 4:44 a.m. | Gianluigi Lopardo, Frederic Precioso, Damien Garreau
cs.LG updates on arXiv.org
Abstract: Interpretability is essential for machine learning models to be trusted and deployed in critical domains. However, existing methods for interpreting text models are often complex, lack mathematical foundations, and offer no performance guarantees. In this paper, we propose FRED (Faithful and Robust Explainer for textual Documents), a novel method for interpreting predictions over text. FRED offers three key insights to explain a model prediction: (1) it identifies the minimal set of words in a …
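The abstract is truncated before it details FRED's construction, but its first insight, finding a minimal set of words responsible for a prediction, can be illustrated generically. The sketch below is not FRED itself (the paper backs its method with formal guarantees); it is a brute-force illustration of the minimal-removal idea, with a toy keyword classifier standing in for a real text model. The function names and the classifier are assumptions for illustration only.

```python
from itertools import combinations

# Toy stand-in for a real text classifier: predicts 1 (positive)
# iff any "positive" keyword appears in the token list.
POSITIVE = {"great", "good", "excellent"}

def predict(words):
    return 1 if any(w in POSITIVE for w in words) else 0

def minimal_flip_set(words, predict):
    """Smallest set of token positions whose deletion changes the prediction.

    Brute force over subsets in order of increasing size, so the first hit
    is minimal by construction. Exponential in sentence length; fine only
    for short texts and for illustration.
    """
    base = predict(words)
    n = len(words)
    for k in range(1, n + 1):
        for subset in combinations(range(n), k):
            kept = [w for i, w in enumerate(words) if i not in subset]
            if predict(kept) != base:
                return list(subset)
    return None  # no deletion flips the prediction

words = "the movie was truly great".split()
print(minimal_flip_set(words, predict))  # position of "great"
```

On this toy input, deleting the single token "great" is enough to flip the toy classifier from positive to negative, so that one-element set is returned as the explanation.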