April 24, 2023, 12:45 a.m. | Przemyslaw A. Grabowicz, Nicholas Perello, Kenta Takatsu

cs.LG updates on arXiv.org

Supervised learning systems are trained on historical data and, if that data
was tainted by discrimination, they may unintentionally learn to discriminate
against protected groups. We propose that fair learning methods, despite
training on potentially discriminatory datasets, should perform well on fair
test datasets. Such dataset shifts crystallize application scenarios for
specific fair learning methods. For instance, the removal of direct
discrimination can be represented as a particular dataset shift problem. For
this scenario, we propose a learning method that …
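To make the train-on-biased / test-on-fair scenario concrete, here is a minimal sketch of the kind of dataset shift the abstract describes: a classifier is fit on labels tainted by direct discrimination against a protected group, then evaluated against labels generated without that discriminatory term. The data-generating process, feature names, and the use of a plain logistic regression are illustrative assumptions, not the authors' proposed method.

```python
# Hypothetical illustration of the dataset-shift evaluation scenario.
# The synthetic data and model choice are assumptions, not the paper's method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, size=n)   # protected attribute A (0/1 group membership)
skill = rng.normal(size=n)           # legitimate predictor X

# "Fair" ground-truth labels depend only on the legitimate feature.
y_fair = (skill + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Historical training labels add direct discrimination: group 1 is penalized.
y_biased = (skill - 1.0 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

X = np.column_stack([skill, group])
idx = rng.permutation(n)
tr, te = idx[: n // 2], idx[n // 2:]

# Train on the tainted historical labels ...
clf = LogisticRegression().fit(X[tr], y_biased[tr])

# ... but evaluate on the shifted, fair test labels.
pred = clf.predict(X[te])
acc_on_fair = (pred == y_fair[te]).mean()
rate_gap = pred[group[te] == 1].mean() - pred[group[te] == 0].mean()
print(f"accuracy against fair labels: {acc_on_fair:.3f}")
print(f"positive-rate gap between groups: {rate_gap:+.3f}")
```

Under this setup, an ordinary learner trained on the biased labels inherits the penalty on group 1, which shows up as a negative positive-rate gap and degraded accuracy on the fair test distribution; a fair learning method in the paper's sense should close that gap.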

