LUCID: Exposing Algorithmic Bias through Inverse Design. (arXiv:2208.12786v1 [cs.LG])
Aug. 29, 2022, 1:11 a.m. | Carmen Mazijn, Carina Prunkl, Andres Algaba, Jan Danckaert, Vincent Ginis
cs.LG updates on arXiv.org arxiv.org
AI systems can create, propagate, support, and automate bias in decision-making processes. To mitigate biased decisions, we both need to understand the origin of the bias and define what it means for an algorithm to make fair decisions. Most group fairness notions assess a model's equality of outcome by computing statistical metrics on the outputs. We argue that these output metrics encounter intrinsic obstacles and present a complementary approach that aligns with the increasing focus on equality of treatment. By …
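The abstract contrasts its inverse-design approach with the output-based group fairness metrics it critiques. A minimal sketch of one such metric, demographic parity difference (an illustrative example; the function and names below are not from the paper), shows what "computing statistical metrics on the outputs" looks like in practice:

```python
# Illustrative sketch of an output-based group fairness metric
# (demographic parity difference), not code from the LUCID paper.
from typing import Sequence


def demographic_parity_difference(preds: Sequence[int],
                                  groups: Sequence[str]) -> float:
    """Absolute gap in positive-prediction rates between two groups."""
    rates = {}
    for g in set(groups):
        member_preds = [p for p, grp in zip(preds, groups) if grp == g]
        rates[g] = sum(member_preds) / len(member_preds)
    a, b = rates.values()
    return abs(a - b)


# Group "A" receives a positive outcome 2/3 of the time, group "B" 1/3,
# so the disparity is roughly 0.33.
preds = [1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))
```

Such metrics only see the model's outputs; the paper's point is that this view has intrinsic limits, motivating a complementary, treatment-focused analysis.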