Aug. 31, 2022, 1:11 a.m. | Marco Scutari, Francesca Panero, Manuel Proissl

cs.LG updates on arXiv.org arxiv.org

In this paper we present a general framework for estimating regression models
subject to a user-defined level of fairness. We enforce fairness as a model
selection step in which we choose the value of a ridge penalty to control the
effect of sensitive attributes. We then estimate the parameters of the model
conditional on the chosen penalty value. Our proposal is mathematically simple,
with a solution that is partly in closed form, and produces estimates of the
regression coefficients that …
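The core idea, penalizing only the coefficients of sensitive attributes via a ridge term and then estimating the model conditional on the chosen penalty, can be sketched as follows. This is a minimal illustration of that general mechanism, not the authors' exact estimator; the function name `fair_ridge` and the toy data are assumptions for the example.

```python
import numpy as np

def fair_ridge(X, y, sensitive_idx, lam):
    """Ridge regression that shrinks only the coefficients of the
    sensitive columns (illustrative sketch, not the paper's method).

    Solves beta = (X'X + lam * D)^{-1} X'y, where D is diagonal
    with ones on the sensitive coordinates and zeros elsewhere,
    so the penalty acts only on the sensitive attributes.
    """
    p = X.shape[1]
    D = np.zeros((p, p))
    for j in sensitive_idx:
        D[j, j] = 1.0
    return np.linalg.solve(X.T @ X + lam * D, X.T @ y)

# Toy data: column 0 plays the role of a sensitive attribute.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

beta_small = fair_ridge(X, y, sensitive_idx=[0], lam=0.0)
beta_large = fair_ridge(X, y, sensitive_idx=[0], lam=1e4)
# A larger penalty shrinks the sensitive coefficient toward zero
# while leaving the non-sensitive coefficient largely unchanged.
```

In the paper's framing, the model-selection step would pick `lam` so that the influence of the sensitive attributes meets a user-defined fairness level, after which the remaining coefficients are estimated conditional on that value.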

