May 11, 2022, 1:11 a.m. | Michael Neely, Stefan F. Schouten, Maurits Bleeker, Ana Lucic

cs.LG updates on arXiv.org arxiv.org

There has been significant debate in the NLP community about whether attention weights can serve as an explanation: a mechanism for interpreting how important each input token is to a particular prediction. The validity of "attention as explanation" has so far been evaluated by computing the rank correlation between attention-based explanations and existing feature attribution explanations using LSTM-based models. In our work, we (i) compare the rank correlation between five more recent feature attribution methods and …
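The evaluation protocol described above reduces to computing a rank correlation (typically Spearman's rho) between two per-token importance vectors: one from attention weights, one from a feature attribution method. A minimal, self-contained sketch of that comparison follows; the scores are illustrative placeholders, not values from the paper, and the pure-Python Spearman implementation stands in for `scipy.stats.spearmanr`.

```python
# Sketch: Spearman rank correlation between attention-based and
# feature-attribution importance scores for the same input tokens.
# All numbers below are hypothetical, for illustration only.

def ranks(xs):
    """Return 1-based ranks of xs, averaging ranks over ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of the tie group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(a, b):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    sa = sum((x - ma) ** 2 for x in ra) ** 0.5
    sb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (sa * sb)

# Hypothetical per-token scores for a 5-token input.
attention   = [0.05, 0.40, 0.10, 0.30, 0.15]  # attention weights
attribution = [0.02, 0.55, 0.08, 0.20, 0.15]  # e.g. gradient-based scores

print(round(spearman(attention, attribution), 3))
```

Here the two methods rank the tokens identically, so rho is 1.0; a low or negative rho would indicate the two explanation mechanisms disagree about which tokens matter.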

