FALE: Fairness-Aware ALE Plots for Auditing Bias in Subgroups
April 30, 2024, 4:42 a.m. | Giorgos Giannopoulos, Dimitris Sacharidis, Nikolas Theologitis, Loukas Kavouras, Ioannis Emiris
cs.LG updates on arXiv.org
Abstract: Fairness is steadily becoming a crucial requirement of Machine Learning (ML) systems. A particularly important notion is subgroup fairness, i.e., fairness in subgroups of individuals defined by more than one attribute. Identifying bias in subgroups can be computationally challenging, and the findings can be difficult to make comprehensible and intuitive to end users. In this work we focus on the latter aspects; we propose an explainability method tailored to identifying …
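The abstract is truncated, but FALE evidently builds on Accumulated Local Effects (ALE) plots. As a minimal sketch only, assuming the core idea is to track a subgroup fairness metric instead of the raw model output along an ALE-style quantile binning, the Python below accumulates local changes in a demographic-parity gap. The function name fale_sketch, the sklearn-style predict_proba interface, and the choice of fairness metric are illustrative assumptions, not the paper's actual algorithm.

import numpy as np

def fale_sketch(model, X, feature_idx, sensitive, n_bins=10):
    # Hypothetical FALE-style computation: standard first-order ALE,
    # but accumulating local changes in a demographic-parity gap
    # instead of in the raw model prediction. Illustrative only; the
    # paper's actual method is not fully visible in the abstract.
    x = X[:, feature_idx]
    # Quantile-based bin edges, as in standard ALE.
    edges = np.unique(np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1)))
    effects = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        last = hi == edges[-1]
        in_bin = (x >= lo) & ((x <= hi) if last else (x < hi))
        if not in_bin.any():
            effects.append(0.0)
            continue
        s = sensitive[in_bin]
        if not ((s == 0).any() and (s == 1).any()):
            effects.append(0.0)  # gap undefined if a group is absent
            continue

        def parity_gap(value):
            # Demographic-parity gap with the feature pinned to `value`.
            Xe = X[in_bin].copy()
            Xe[:, feature_idx] = value
            p = model.predict_proba(Xe)[:, 1]  # sklearn-style API (assumed)
            return p[s == 1].mean() - p[s == 0].mean()

        # Local effect: change in the gap across the bin.
        effects.append(parity_gap(hi) - parity_gap(lo))
    # Accumulate and center, as in standard ALE plots.
    fale = np.cumsum(effects)
    return edges, fale - fale.mean()

Given a fitted binary classifier clf and a 0/1 sensitive-attribute vector s, calling fale_sketch(clf, X, 2, s) returns bin edges and a centered curve whose slope shows where the feature locally widens or narrows the group gap.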