Web: http://arxiv.org/abs/2201.11569

Jan. 28, 2022, 2:11 a.m. | Hendrik Schuff, Alon Jacovi, Heike Adel, Yoav Goldberg, Ngoc Thang Vu

cs.LG updates on arXiv.org

While a lot of research in explainable AI focuses on producing effective
explanations, less work is devoted to the question of how people understand
and interpret those explanations. In this work, we focus on this question
through a study of saliency-based explanations over textual data.
Feature-attribution explanations of text models aim to communicate which
parts of the input text were more influential than others toward the model's
decision. Many current explanation methods, such as gradient-based or Shapley
value-based methods, provide …
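For context on the gradient-based methods the abstract names, below is a minimal sketch of gradient-based token saliency over text. It is not the paper's method; the model checkpoint, variable names, and the choice of gradient L2 norm as the saliency score are illustrative assumptions.

```python
# Minimal sketch: gradient-based token saliency for a text classifier.
# Assumptions (not from the paper): a Hugging Face sentiment model and the
# L2 norm of the embedding gradient as the per-token attribution score.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "distilbert-base-uncased-finetuned-sst-2-english"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)
model.eval()

text = "The movie was surprisingly good."
inputs = tokenizer(text, return_tensors="pt")

# Embed the tokens manually so we can take gradients w.r.t. the embeddings;
# detach() makes the embeddings a leaf tensor that accumulates .grad.
embeddings = model.get_input_embeddings()(inputs["input_ids"]).detach()
embeddings.requires_grad_(True)

outputs = model(inputs_embeds=embeddings, attention_mask=inputs["attention_mask"])
predicted_class = outputs.logits.argmax(dim=-1).item()

# Backpropagate the predicted-class logit to the token embeddings.
outputs.logits[0, predicted_class].backward()

# One common saliency score: L2 norm of the gradient per token.
saliency = embeddings.grad[0].norm(dim=-1)
for token, score in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]),
                        saliency):
    print(f"{token:>12s}  {score.item():.4f}")
```

Scores like these are what saliency-based explanations present to users; how people actually read and interpret them is the question the paper studies.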

Tags: arxiv, human, text
