Jan. 17, 2022, 2:10 a.m. | Mitchell Plyler, Michael Green, Min Chi

cs.LG updates on arXiv.org

Rationales, snippets of extracted text that explain an inference, have
emerged as a popular framework for interpretable natural language processing
(NLP). Rationale models typically consist of two cooperating modules: a
selector and a classifier, with the goal of maximizing the mutual information
(MMI) between the "selected" text and the document label. Despite their
promise, MMI-based methods often pick up on spurious text patterns and result
in models with nonsensical behaviors. In this work, we investigate whether
counterfactual data augmentation (CDA), …
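The selector/classifier pipeline the abstract describes can be illustrated with a toy sketch. This is not the paper's method: the actual modules are neural networks trained jointly under an MMI-style objective, whereas here the selector is a simple top-k scorer and the classifier a keyword rule, both invented for illustration.

```python
# Toy sketch of a rationale model: a selector extracts a text snippet
# (the "rationale"), and a classifier predicts the label from that
# snippet alone. Scores and the keyword rule are illustrative stand-ins
# for learned neural modules.

def select_rationale(tokens, scores, k=2):
    """Selector: keep the k highest-scoring tokens as the rationale."""
    ranked = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)
    keep = sorted(ranked[:k])  # preserve original word order
    return [tokens[i] for i in keep]

def classify(rationale, positive_words=frozenset({"great", "excellent"})):
    """Classifier: predict the document label from the selected text only."""
    return 1 if any(w in positive_words for w in rationale) else 0

tokens = ["the", "plot", "was", "excellent", "overall"]
scores = [0.1, 0.4, 0.2, 0.9, 0.3]
rationale = select_rationale(tokens, scores)  # -> ["plot", "excellent"]
label = classify(rationale)                   # -> 1
```

A spurious-pattern failure mode is easy to see even here: if the selector's scores happen to favor an uninformative token that merely correlates with the label in training data, the classifier can still fit it, which is the behavior MMI-based training is prone to and CDA aims to mitigate.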

