Sept. 29, 2022, 1:15 a.m. | Zhengxuan Wu, Karel D'Oosterlinck, Atticus Geiger, Amir Zur, Christopher Potts

cs.CL updates on arXiv.org

Explainability methods for NLP systems encounter a version of the fundamental
problem of causal inference: for a given ground-truth input text, we never
truly observe the counterfactual texts necessary for isolating the causal
effects of model representations on outputs. In response, many explainability
methods make no use of counterfactual texts, assuming they will be unavailable.
In this paper, we show that robust causal explainability methods can be created
using approximate counterfactuals, which can be written by humans to
approximate a …
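As an illustration of the underlying idea, not the paper's specific method, the sketch below estimates a concept's causal effect on a model's output by comparing predictions on a factual text and a human-written approximate counterfactual that edits only that concept. The model name and the example text pair are assumptions made for the sketch, not taken from the paper.

```python
# Minimal sketch: estimating a concept-level causal effect from an
# (original, approximate-counterfactual) text pair. The model and the
# example texts are illustrative assumptions, not the authors' setup.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "distilbert-base-uncased-finetuned-sst-2-english"  # assumed example model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def positive_prob(text: str) -> float:
    """Return the model's P(positive sentiment) for one input text."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

# Hypothetical pair: the counterfactual rewrites only the food-quality
# concept, holding everything else in the review fixed.
factual = "The service was quick and the food was delicious."
counterfactual = "The service was quick and the food was bland."

# The output difference approximates the individual-level causal effect
# of the edited concept -- the quantity that is unobservable from
# factual data alone (the fundamental problem of causal inference).
effect = positive_prob(factual) - positive_prob(counterfactual)
print(f"Estimated effect of food quality on P(positive): {effect:+.3f}")
```

Because the true counterfactual text is never observed, the human-written edit stands in for it; how well such approximate counterfactuals support robust causal explanations is the question the paper takes up.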
