Aug. 17, 2022, 1:11 a.m. | Thang M. Pham, Trung Bui, Long Mai, Anh Nguyen

cs.CL updates on arXiv.org arxiv.org

A principle behind dozens of attribution methods is to take the difference in
a model's prediction before and after an input feature (here, a token) is
removed as that feature's attribution. A popular Input Marginalization (IM)
method (Kim et al., 2020) instead uses BERT to replace the token, yielding more
plausible counterfactuals. While Kim et al. (2020) reported that IM is
effective, we find this conclusion unconvincing because the DeletionBERT metric
used in their paper is biased towards IM. Importantly, this bias exists in
Deletion-based …

