Double Trouble: How to not explain a text classifier's decisions using counterfactuals synthesized by masked language models? (arXiv:2110.11929v4 [cs.CL] UPDATED)
Oct. 12, 2022, 1:17 a.m. | Thang M. Pham, Trung Bui, Long Mai, Anh Nguyen
cs.CL updates on arXiv.org arxiv.org
A principle behind dozens of attribution methods is to take the prediction
difference before and after an input feature (here, a token) is removed as
that feature's attribution. The popular Input Marginalization (IM) method
(Kim et al., 2020) uses BERT to replace a token, yielding more plausible
counterfactuals. While Kim et al. (2020) reported that IM is effective, we
find this conclusion unconvincing, as the DeletionBERT metric used in their
paper is biased towards IM. Importantly, this bias exists in Deletion-based …
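The deletion principle the abstract describes can be sketched in a few lines: score the full input, score it again with one token removed, and take the difference as that token's attribution. The classifier below is a toy keyword-count stand-in (an assumption for illustration), not BERT or the IM method from Kim et al. (2020), which replaces tokens with masked-language-model samples rather than deleting them.

```python
# Minimal sketch of leave-one-out (deletion) attribution.
# toy_sentiment_score is a hypothetical classifier, not a real model.

def toy_sentiment_score(tokens):
    """Toy classifier: a probability-like score from keyword counts."""
    positive = {"great", "good", "love"}
    negative = {"bad", "awful", "hate"}
    score = 0.5
    for t in tokens:
        if t in positive:
            score += 0.2
        elif t in negative:
            score -= 0.2
    return max(0.0, min(1.0, score))

def leave_one_out_attributions(tokens, score_fn):
    """Attribution of token i = score(full input) - score(input without token i)."""
    full = score_fn(tokens)
    return {
        i: full - score_fn(tokens[:i] + tokens[i + 1:])
        for i in range(len(tokens))
    }

tokens = ["the", "movie", "was", "great"]
attrs = leave_one_out_attributions(tokens, toy_sentiment_score)
# Sentiment-bearing tokens get nonzero attributions; neutral tokens near zero.
```

IM's refinement is to replace the removed token with plausible substitutes sampled from a masked language model and marginalize over them, rather than leaving an out-of-distribution gap; the paper's point is that the metric used to evaluate this refinement shares its machinery and so favors it.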