Web: http://arxiv.org/abs/2201.12114

Jan. 31, 2022, 2:11 a.m. | Yibing Liu, Haoliang Li, Yangyang Guo, Chenqi Kong, Jing Li, Shiqi Wang

cs.LG updates on arXiv.org

Attention mechanisms dominate the explainability of deep models. They
produce probability distributions over the input, which are widely regarded as
feature-importance indicators. However, in this paper, we identify a critical
limitation of attention explanations: weakness in identifying the polarity of
feature impact. This can be misleading -- features with higher
attention weights may not faithfully contribute to model predictions; instead,
they can impose suppression effects. With this finding, we reflect on the
explainability of current attention-based techniques, such …
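The polarity issue the abstract describes can be illustrated with a toy, self-contained sketch (this is an assumption-laden illustration, not the paper's experimental setup): compute softmax attention weights for a tiny one-head scoring model, then probe each feature by masking it and observing the sign of the change in the model's output. A feature with a high attention weight whose removal *raises* the score is acting as a suppressor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-head attention "model": score = w · (attention-weighted sum of inputs).
# All names (x, q, w) are illustrative, not from the paper.
x = rng.normal(size=(5, 4))   # 5 input features, 4-dim embeddings
q = rng.normal(size=4)        # query vector producing attention logits
w = rng.normal(size=4)        # output projection

logits = x @ q
attn = np.exp(logits) / np.exp(logits).sum()    # softmax attention weights
score = w @ (attn[:, None] * x).sum(axis=0)     # model prediction score

# Polarity probe: mask each feature, recompute attention and score.
# A positive delta means removing the feature *raised* the score,
# i.e. the feature was suppressing the prediction -- regardless of
# how much attention weight it received.
for i in range(len(attn)):
    keep = np.delete(np.arange(len(attn)), i)
    l = x[keep] @ q
    a = np.exp(l) / np.exp(l).sum()
    s = w @ (a[:, None] * x[keep]).sum(axis=0)
    print(f"feature {i}: attention={attn[i]:.3f}, delta={s - score:+.3f}")
```

Sorting features by attention weight and by the sign of the masking delta generally gives different rankings in this toy setup, which is the mismatch the paper's finding points at.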

