Feb. 20, 2024, 5:43 a.m. | Marcello Di Bello, Nicolò Cangiotti, Michele Loi

cs.LG updates on arXiv.org

arXiv:2402.12062v1 Announce Type: cross
Abstract: Over the last ten years, the literature in computer science and philosophy has formulated different criteria of algorithmic fairness. One of the most discussed, classification parity, requires that the erroneous classifications of a predictive algorithm occur with equal frequency for groups picked out by protected characteristics. Despite its intuitive appeal, classification parity has come under attack. Multiple scenarios can be imagined in which, intuitively, a predictive algorithm does not treat any individual unfairly, …
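
For readers unfamiliar with the criterion, here is a minimal sketch (not taken from the paper) of how classification parity can be checked empirically: it compares false positive and false negative rates across groups defined by a protected attribute. The function name and toy data below are illustrative assumptions.

```python
import numpy as np

def group_error_rates(y_true, y_pred, group):
    """Return per-group false positive and false negative rates."""
    rates = {}
    for g in np.unique(group):
        mask = group == g
        yt, yp = y_true[mask], y_pred[mask]
        # False positive rate: fraction of true negatives predicted positive.
        fpr = np.mean(yp[yt == 0] == 1) if np.any(yt == 0) else np.nan
        # False negative rate: fraction of true positives predicted negative.
        fnr = np.mean(yp[yt == 1] == 0) if np.any(yt == 1) else np.nan
        rates[g] = {"FPR": fpr, "FNR": fnr}
    return rates

# Hypothetical toy data with two groups, A and B.
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 1, 0, 1, 1])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print(group_error_rates(y_true, y_pred, group))
# Classification parity holds (approximately) when FPR and FNR agree
# across groups; here the FNRs differ, so parity is violated.
```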
