Exploring the Trade-off Between Model Performance and Explanation Plausibility of Text Classifiers Using Human Rationales
April 5, 2024, 4:42 a.m. | Lucas E. Resck, Marcos M. Raimundo, Jorge Poco
cs.LG updates on arXiv.org
Abstract: Saliency-based post-hoc explainability methods are important tools for understanding increasingly complex NLP models. While these methods can reflect the model's reasoning, they may not align with human intuition, rendering the explanations implausible. In this work, we present a methodology for incorporating rationales, which are text annotations explaining human decisions, into text classification models. This incorporation enhances the plausibility of post-hoc explanations while preserving their faithfulness. Our approach is agnostic to model architectures and explainability …
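The abstract does not include implementation details, but the core idea of aligning model saliency with human rationales can be sketched as an auxiliary loss added to the usual classification objective. The sketch below is a minimal illustration under that assumption; `rationale_alignment_loss`, `combined_loss`, and the cross-entropy formulation are hypothetical names and choices, not taken from the paper.

```python
import math

def rationale_alignment_loss(saliency, rationale_mask):
    """Cross-entropy between per-token saliency scores (in [0, 1])
    and a binary human rationale mask. Low when the model attends
    to the same tokens a human annotator marked as important."""
    eps = 1e-8  # avoid log(0)
    return -sum(
        m * math.log(s + eps) + (1 - m) * math.log(1 - s + eps)
        for s, m in zip(saliency, rationale_mask)
    ) / len(saliency)

def combined_loss(task_loss, saliency, rationale_mask, lam=0.5):
    """Total training objective: the usual classification loss plus
    a lambda-weighted rationale-alignment term (lam is a hypothetical
    trade-off hyperparameter, echoing the performance/plausibility
    trade-off the title refers to)."""
    return task_loss + lam * rationale_alignment_loss(saliency, rationale_mask)
```

Setting `lam = 0` recovers ordinary training, while larger values push the model's saliency toward the human rationales, which is one plausible way the trade-off studied in the paper could arise.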