LUCID: Exposing Algorithmic Bias through Inverse Design. (arXiv:2208.12786v1 [cs.LG])
Aug. 29, 2022, 1:11 a.m. | Carmen Mazijn, Carina Prunkl, Andres Algaba, Jan Danckaert, Vincent Ginis
cs.LG updates on arXiv.org arxiv.org
AI systems can create, propagate, support, and automate bias in
decision-making processes. To mitigate biased decisions, we both need to
understand the origin of the bias and define what it means for an algorithm to
make fair decisions. Most group fairness notions assess a model's equality of
outcome by computing statistical metrics on the outputs. We argue that these
output metrics encounter intrinsic obstacles and present a complementary
approach that aligns with the increasing focus on equality of treatment. By …
Jobs in AI, ML, Big Data
Founding AI Engineer, Agents
@ Occam AI | New York
AI Engineer Intern, Agents
@ Occam AI | US
AI Research Scientist
@ Vara | Berlin, Germany and Remote
Data Architect
@ University of Texas at Austin | Austin, TX
Data ETL Engineer
@ University of Texas at Austin | Austin, TX
Machine Learning Engineer - Sr. Consultant level
@ Visa | Bellevue, WA, United States