April 5, 2024, 4:42 a.m. | Lucas E. Resck, Marcos M. Raimundo, Jorge Poco

cs.LG updates on arXiv.org

arXiv:2404.03098v1 Announce Type: cross
Abstract: Saliency post-hoc explainability methods are important tools for understanding increasingly complex NLP models. While these methods can reflect the model's reasoning, they may not align with human intuition, making the explanations implausible. In this work, we present a methodology for incorporating rationales, which are text annotations explaining human decisions, into text classification models. This incorporation enhances the plausibility of post-hoc explanations while preserving their faithfulness. Our approach is agnostic to model architectures and explainability …
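The truncated abstract only outlines the idea, but the general pattern it describes (supervising a classifier's token-level saliency with human rationale annotations) can be illustrated with a minimal sketch. The sketch below is not the paper's implementation: the tiny classifier, the gradient-based saliency, the MSE alignment term, and the `alpha` weight are all assumptions chosen for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyTextClassifier(nn.Module):
    """Minimal bag-of-embeddings classifier, used only for illustration."""

    def __init__(self, vocab_size=1000, embed_dim=64, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward_from_embeddings(self, embedded):
        # Mean-pool token embeddings, then classify.
        return self.classifier(embedded.mean(dim=1))


def rationale_guided_loss(model, token_ids, labels, rationale_mask, alpha=0.5):
    """Task loss plus an alignment term between gradient-based token
    saliency and binary human rationale annotations (illustrative only)."""
    embedded = model.embedding(token_ids)                  # (batch, seq, dim)
    logits = model.forward_from_embeddings(embedded)
    task_loss = F.cross_entropy(logits, labels)

    # Saliency: gradient of the gold-class score w.r.t. token embeddings,
    # reduced to one score per token and normalized over the sequence.
    class_scores = logits.gather(1, labels.unsqueeze(1)).sum()
    (grads,) = torch.autograd.grad(class_scores, embedded, create_graph=True)
    saliency = grads.norm(dim=-1)
    saliency = saliency / (saliency.sum(dim=1, keepdim=True) + 1e-8)

    # Alignment: push saliency mass toward tokens humans marked as rationales.
    rationale = rationale_mask.float()
    rationale = rationale / (rationale.sum(dim=1, keepdim=True) + 1e-8)
    alignment_loss = F.mse_loss(saliency, rationale)

    return task_loss + alpha * alignment_loss


# Toy usage with random data, just to show the shapes involved.
model = TinyTextClassifier()
token_ids = torch.randint(0, 1000, (4, 16))        # 4 sentences, 16 tokens each
labels = torch.randint(0, 2, (4,))
rationale_mask = torch.randint(0, 2, (4, 16))      # 1 = token is in the rationale
loss = rationale_guided_loss(model, token_ids, labels, rationale_mask)
loss.backward()
```

The paper's actual explanation method and loss formulation may differ; this sketch only conveys the rationale-supervision pattern the abstract describes, where human annotations steer saliency without replacing the task objective.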
