Attention cannot be an Explanation. (arXiv:2201.11194v1 [cs.HC])
Web: http://arxiv.org/abs/2201.11194
Jan. 28, 2022, 2:10 a.m. | Arjun R Akula, Song-Chun Zhu
cs.LG updates on arXiv.org
Attention-based explanations (viz. saliency maps), by providing interpretability to black-box models such as deep neural networks, are assumed to improve human trust and reliance in the underlying models. Recently, it has been shown that attention weights are frequently uncorrelated with gradient-based measures of feature importance. Motivated by this, we ask a follow-up question: "Assuming that we only consider the tasks where attention weights correlate well with feature importance, how effective are these attention-based explanations in increasing human …
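To make the correlation claim in the abstract concrete, below is a minimal sketch (not the paper's code) of the kind of comparison it references: extract a toy model's attention weights over input tokens, compute a gradient-based importance score for the same tokens, and measure their rank agreement with Kendall's tau. The TinyAttentionClassifier, its dimensions, and all names are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn
from scipy.stats import kendalltau

torch.manual_seed(0)

class TinyAttentionClassifier(nn.Module):
    """Toy classifier with a single additive-attention pooling layer
    over token embeddings (illustrative, not the paper's model)."""
    def __init__(self, vocab_size=100, dim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.attn_score = nn.Linear(dim, 1)   # one score per token
        self.out = nn.Linear(dim, 2)          # binary classifier head

    def forward(self, token_ids):
        x = self.embed(token_ids)                                       # (seq, dim)
        weights = torch.softmax(self.attn_score(x).squeeze(-1), dim=0)  # (seq,)
        pooled = (weights.unsqueeze(-1) * x).sum(dim=0)                 # attention pooling
        return self.out(pooled), weights, x

model = TinyAttentionClassifier()
tokens = torch.randint(0, 100, (12,))

logits, attn_weights, embeddings = model(tokens)
embeddings.retain_grad()   # keep gradients on the intermediate embedding tensor
logits[1].backward()       # gradient of the positive-class logit w.r.t. inputs

# Gradient-based importance: L2 norm of d(logit)/d(embedding) per token.
grad_importance = embeddings.grad.norm(dim=-1)

tau, _ = kendalltau(attn_weights.detach().numpy(),
                    grad_importance.detach().numpy())
print(f"Kendall tau between attention and gradient importance: {tau:.3f}")
```

On an untrained toy model like this the resulting tau is essentially arbitrary; the studies the abstract alludes to run this comparison on trained models across many inputs, which is how the frequent lack of correlation was established.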
Latest AI/ML/Big Data Jobs
Director, Data Science (Advocacy & Nonprofit)
@ Civis Analytics | Remote
Data Engineer
@ Rappi | [CO] Bogotá
Data Scientist V, Marketplaces Personalization (Remote)
@ ID.me | United States (U.S.)
Product Ops Data Analyst (Flex/Remote)
@ Scaleway | Paris
Big Data Engineer
@ Risk Focus | Riga, Latvia
Internship Program: Machine Learning Backend
@ Nextail | Remote job