Easy to Decide, Hard to Agree: Reducing Disagreements Between Saliency Methods. (arXiv:2211.08369v1 [cs.CL])
Nov. 16, 2022, 2:16 a.m. | Josip Jukić, Martin Tutek, Jan Šnajder
cs.CL updates on arXiv.org
A popular approach to unveiling the black box of neural NLP models is to
leverage saliency methods, which assign scalar importance scores to each input
component. A common practice for evaluating whether an interpretability method
is "faithful" and "plausible" has been evaluation-by-agreement: multiple
methods agreeing on an explanation increases its credibility. However, recent
work has found that even different saliency methods have weak rank correlations
with one another, and has advocated the use of alternative diagnostic methods.
In our work, we …
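
The disagreement the abstract refers to is typically quantified by rank
correlation over per-token importance scores. As a minimal illustrative sketch
(the score vectors and method names below are made-up placeholders, not data
or code from the paper), comparing two hypothetical saliency methods with
Spearman's rho might look like this:

import numpy as np
from scipy.stats import spearmanr

# Hypothetical token-importance scores from two saliency methods
# (e.g., gradient-based vs. attention-based) over the same input;
# these values are illustrative placeholders only.
scores_method_a = np.array([0.61, 0.05, 0.33, 0.91, 0.12])
scores_method_b = np.array([0.40, 0.10, 0.25, 0.80, 0.30])

# Evaluation-by-agreement: a high rank correlation means the two
# methods order the tokens by importance in a similar way; a weak
# correlation signals the kind of disagreement the paper studies.
rho, p_value = spearmanr(scores_method_a, scores_method_b)
print(f"Spearman rank correlation: {rho:.3f} (p = {p_value:.3f})")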