The Fairness of Credit Scoring Models. (arXiv:2205.10200v1 [stat.ML])
May 23, 2022, 1:11 a.m. | Christophe Hurlin, Christophe Pérignon, Sébastien Saurin
stat.ML updates on arXiv.org
In credit markets, screening algorithms aim to discriminate between good-type
and bad-type borrowers. In doing so, however, they often also discriminate
between individuals who share a protected attribute (e.g. gender, age, racial
origin) and the rest of the population. In this paper, we show how (1) to test
whether there is a statistically significant difference between the protected
and unprotected groups, which we call lack of fairness, and (2) to identify the
variables that cause the lack of fairness. We then use …
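The fairness test described in point (1) can be illustrated with a generic sketch: a two-proportion z-test comparing approval rates between a protected group and the rest of the population. This is not the paper's own testing procedure, and the counts below are purely hypothetical.

```python
import math

def two_proportion_z_test(approved_a, total_a, approved_b, total_b):
    """Two-sided z-test for a difference in approval rates between two groups.

    Returns the z statistic and the two-sided p-value under the null
    hypothesis that both groups have the same true approval rate.
    """
    p_a = approved_a / total_a
    p_b = approved_b / total_b
    # Pooled approval rate under the null of equal rates
    p_pool = (approved_a + approved_b) / (total_a + total_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF via erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical data: 300/500 approvals in the protected group
# versus 360/500 in the rest of the population.
z, p = two_proportion_z_test(300, 500, 360, 500)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value would indicate a statistically significant gap in approval rates, i.e. a lack of fairness in the sense above; identifying which input variables drive that gap (point 2) requires additional attribution analysis.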