April 5, 2024, 4:42 a.m. | Lucas E. Resck, Marcos M. Raimundo, Jorge Poco

cs.LG updates on arXiv.org arxiv.org

arXiv:2404.03098v1 Announce Type: cross
Abstract: Saliency post-hoc explainability methods are important tools for understanding increasingly complex NLP models. While these methods can reflect the model's reasoning, they may not align with human intuition, which makes the explanations implausible. In this work, we present a methodology for incorporating rationales, which are text annotations explaining human decisions, into text classification models. This incorporation enhances the plausibility of post-hoc explanations while preserving their faithfulness. Our approach is agnostic to model architectures and explainability …
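The abstract does not spell out how the rationales enter training, but one common realization of this idea is an auxiliary loss that pulls gradient-based token saliency toward the human rationale mask during fine-tuning. The sketch below is a minimal, hypothetical illustration of that setup in PyTorch; the `TinyTextClassifier` model, the `rationale_loss` function, and the weighting parameter `lam` are assumptions made for this example, not the paper's actual formulation.

```python
# Hypothetical sketch: rationale-regularized training for a text classifier.
# Assumes PyTorch; the model, saliency definition, and loss weighting are
# illustrative assumptions, not the method described in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyTextClassifier(nn.Module):
    """Bag-of-embeddings classifier; stands in for any differentiable NLP model."""
    def __init__(self, vocab_size=1000, embed_dim=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.fc = nn.Linear(embed_dim, num_classes)

    def forward(self, token_ids):
        emb = self.embed(token_ids)           # (batch, seq, dim)
        logits = self.fc(emb.mean(dim=1))     # (batch, num_classes)
        return logits, emb

def rationale_loss(logits, emb, labels, rationale_mask, lam=0.1):
    """Cross-entropy plus an auxiliary term that aligns gradient-based
    token saliency with human rationale annotations (0/1 per token)."""
    task_loss = F.cross_entropy(logits, labels)
    # Gradient of the gold-class score w.r.t. token embeddings as saliency.
    score = logits.gather(1, labels.unsqueeze(1)).sum()
    grads = torch.autograd.grad(score, emb, create_graph=True)[0]
    saliency = grads.norm(dim=-1)                                     # (batch, seq)
    saliency = saliency / (saliency.sum(dim=1, keepdim=True) + 1e-8)
    target = rationale_mask / (rationale_mask.sum(dim=1, keepdim=True) + 1e-8)
    align_loss = F.mse_loss(saliency, target)
    return task_loss + lam * align_loss

# Toy usage: batch of 4 sequences of 10 tokens with per-token rationale masks.
model = TinyTextClassifier()
tokens = torch.randint(0, 1000, (4, 10))
labels = torch.randint(0, 2, (4,))
rationales = torch.randint(0, 2, (4, 10)).float()
logits, emb = model(tokens)
loss = rationale_loss(logits, emb, labels, rationales)
loss.backward()
```

The design intuition is that penalizing the gap between the model's saliency distribution and the annotated rationale nudges the explanation toward human intuition (plausibility) while the explanation is still computed from the model itself (faithfulness).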
