Jan. 7, 2022, 2:10 a.m. | Saloni Dash, Vineeth N Balasubramanian, Amit Sharma

cs.LG updates on arXiv.org arxiv.org

Counterfactual examples for an input -- perturbations that change specific
features but not others -- have been shown to be useful for evaluating bias of
machine learning models, e.g., against specific demographic groups. However,
generating counterfactual examples for images is non-trivial due to the
underlying causal structure on the various features of an image. To be
meaningful, generated perturbations need to satisfy constraints implied by the
causal model. We present a method for generating counterfactuals by
incorporating a structural causal …
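
To make the constraint concrete, here is a rough, hypothetical sketch (not the paper's actual method) of counterfactual generation over a toy linear structural causal model of tabular attributes, followed by a bias probe of a stand-in classifier. The attribute names (age, gray_hair, wrinkles), the structural equations, and the classifier are all assumptions made up for illustration.

```python
# Minimal sketch: a counterfactual that respects a toy SCM via the usual
# abduction / action / prediction steps, then a check of how much a
# (hypothetical) classifier's output shifts under the intervention.
import numpy as np

rng = np.random.default_rng(0)

def sample_scm(n):
    """Toy SCM: age -> gray_hair, age -> wrinkles (plus independent noise)."""
    age = rng.normal(50, 10, n)
    gray_hair = 0.8 * age + rng.normal(0, 5, n)
    wrinkles = 0.5 * age + rng.normal(0, 5, n)
    return np.stack([age, gray_hair, wrinkles], axis=1)

def counterfactual(x, new_age):
    """Abduct the noise terms from x, intervene on age, and regenerate the
    descendants so the perturbation satisfies the SCM's constraints."""
    age, gray_hair, wrinkles = x
    u_gray = gray_hair - 0.8 * age                 # abduction: recover exogenous noise
    u_wrink = wrinkles - 0.5 * age
    return np.array([new_age,                      # action: do(age = new_age)
                     0.8 * new_age + u_gray,       # prediction: propagate to descendants
                     0.5 * new_age + u_wrink])

def classifier(x):
    """Stand-in model being audited; it (deliberately) leaks the age attribute."""
    return 1.0 / (1.0 + np.exp(-(0.05 * x[0] - 2.5)))

x = sample_scm(1)[0]
x_cf = counterfactual(x, new_age=x[0] + 20)
print("prediction shift under do(age += 20):", classifier(x_cf) - classifier(x))
```

The point of the sketch is the counterfactual step: only changing the intervened attribute while leaving its causal descendants fixed would produce an inconsistent example, so the descendants are regenerated from the recovered noise. A large prediction shift on such SCM-consistent counterfactuals is one signal of bias with respect to the intervened attribute.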

Tags: arxiv, bias, cv, perspective
