March 28, 2024, 4:42 a.m. | Xianli Zeng, Guang Cheng, Edgar Dobriban

cs.LG updates on arXiv.org arxiv.org

arXiv:2403.18216v1 Announce Type: cross
Abstract: Mitigating the disparate impact of statistical machine learning methods is crucial for ensuring fairness. While extensive research aims to reduce disparity, the effect of using a \emph{finite dataset} -- as opposed to the entire population -- remains unclear. This paper explores the statistical foundations of fair binary classification with two protected groups, focusing on controlling demographic disparity, defined as the difference in acceptance rates between the groups. Although fairness may come at the cost of …
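The abstract defines demographic disparity as the difference in acceptance rates between two protected groups. As a minimal illustration (not code from the paper), that quantity can be computed directly from a classifier's binary decisions and group labels; the function name and toy data below are hypothetical:

```python
import numpy as np

def demographic_disparity(y_pred, group):
    """Absolute difference in acceptance rates between two protected groups.

    y_pred: array of 0/1 classifier decisions (1 = accept).
    group:  array of 0/1 protected-group labels.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # acceptance rate in group 0
    rate_1 = y_pred[group == 1].mean()  # acceptance rate in group 1
    return abs(rate_0 - rate_1)

# Toy example: group 0 accepted 3/4, group 1 accepted 1/4.
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_disparity(y_pred, group))  # 0.5
```

The paper studies how well this population-level quantity can be controlled when only a finite sample is available, so in practice the empirical rates above are noisy estimates of the true group acceptance rates.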
