Aug. 29, 2022, 1:11 a.m. | Carmen Mazijn, Carina Prunkl, Andres Algaba, Jan Danckaert, Vincent Ginis

cs.LG updates on arXiv.org

AI systems can create, propagate, support, and automate bias in
decision-making processes. To mitigate biased decisions, we need both to
understand the origin of the bias and to define what it means for an
algorithm to make fair decisions. Most group fairness notions assess a
model's equality of outcome by computing statistical metrics on its outputs.
We argue that these output metrics face intrinsic obstacles and present a
complementary approach that aligns with the increasing focus on equality of treatment. By …
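For context, here is a minimal sketch of one such output metric, statistical parity difference, which compares positive-outcome rates between two groups. This is a standard illustration of the output-based metrics the abstract refers to, not the paper's own method; the predictions and group labels below are made up.

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """Difference in positive-outcome rates between two groups.

    y_pred : array-like of binary model predictions (0/1)
    group  : array-like of binary group membership indicators (0/1)
    Returns rate(group 1) - rate(group 0); 0 means equal outcome rates.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # positive rate for group 0
    rate_1 = y_pred[group == 1].mean()  # positive rate for group 1
    return rate_1 - rate_0

# Illustrative usage with made-up data
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(statistical_parity_difference(y_pred, group))  # -0.5
```

A metric like this is computed purely on model outputs, which is exactly the property the authors identify as a limitation: it says nothing about how individuals were treated inside the model.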

algorithmic bias, arxiv, bias, design, lg
