June 29, 2022, 1:12 a.m. | Zee Fryer, Vera Axelrod, Ben Packer, Alex Beutel, Jilin Chen, Kellie Webster

cs.CL updates on arXiv.org

A common approach for testing fairness issues in text-based classifiers is
through the use of counterfactuals: does the classifier output change if a
sensitive attribute in the input is changed? Existing counterfactual generation
methods typically rely on wordlists or templates, producing simple
counterfactuals that do not account for grammar, context, or subtle
sensitive-attribute references, and can miss issues the wordlist
creators had not considered. In this paper, we introduce a task for generating
counterfactuals that overcomes these shortcomings, …

Tags: arxiv, fairness, generation, text, text generation
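The wordlist-based approach the abstract critiques can be sketched as a simple term-substitution pass. A minimal illustration, using a small hypothetical swap list (the terms and helper names here are assumptions, not from the paper):

```python
import re

# Hypothetical wordlist of sensitive-attribute term pairs (illustrative only).
SWAP_PAIRS = {
    "he": "she", "she": "he",
    "his": "her", "her": "his",
    "man": "woman", "woman": "man",
}

def wordlist_counterfactual(text: str) -> str:
    """Generate a counterfactual by swapping each wordlist term in place."""
    def swap(match: re.Match) -> str:
        word = match.group(0)
        repl = SWAP_PAIRS.get(word.lower(), word)
        # Preserve the capitalization of the original token.
        return repl.capitalize() if word[0].isupper() else repl
    return re.sub(r"\b\w+\b", swap, text)

print(wordlist_counterfactual("He lost his keys."))
# → "She lost her keys."
```

This token-level swap illustrates the shortcomings the paper targets: it cannot fix resulting grammatical errors (e.g. object "her" vs. possessive "his"), ignores context (names, coreference), and misses any sensitive-attribute reference absent from the wordlist, such as "my brother".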
