Feb. 28, 2024, 5:42 a.m. | Muhammad Faaiz Taufiq, Jean-Francois Ton, Yang Liu

cs.LG updates on arXiv.org

arXiv:2402.17106v1 Announce Type: cross
Abstract: In machine learning fairness, training models that minimize disparity across different sensitive groups often leads to diminished accuracy, a phenomenon known as the fairness-accuracy trade-off. The severity of this trade-off fundamentally depends on dataset characteristics such as imbalances or biases. Therefore, imposing a uniform fairness requirement across datasets remains questionable and can often lead to models with substantially reduced utility. To address this, we present a computationally efficient approach to approximate the fairness-accuracy trade-off …
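The abstract is truncated before it describes the paper's approximation method, so as a purely illustrative aside, the sketch below traces one possible fairness-accuracy curve by sweeping the weight `lam` of a demographic-parity penalty on a synthetic biased dataset. The penalty, the data, and every name here are assumptions for illustration, not the authors' technique.

```python
# Illustrative sketch of the fairness-accuracy trade-off (NOT the paper's
# method): logistic regression with a demographic-parity penalty of weight
# `lam`; larger `lam` typically shrinks the parity gap at some cost in accuracy.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic biased dataset: group a=1 has a shifted feature distribution,
# so an unconstrained classifier's positive rate differs across groups.
n = 4000
a = rng.integers(0, 2, n)                       # sensitive attribute
x = rng.normal(loc=a * 1.0, scale=1.0, size=(n, 1))
y = (x[:, 0] + 0.5 * a + rng.normal(0, 1, n) > 0.75).astype(float)
X = np.hstack([x, np.ones((n, 1))])             # add intercept column


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def train(lam, steps=2000, lr=0.1):
    """Gradient descent on logistic loss + lam * (parity gap)^2."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad_ll = X.T @ (p - y) / n             # logistic-loss gradient
        # Demographic-parity gap: difference in mean predicted score by group.
        gap = p[a == 1].mean() - p[a == 0].mean()
        s = p * (1 - p)                         # sigmoid derivative
        dgap = (X[a == 1] * s[a == 1, None]).mean(0) \
             - (X[a == 0] * s[a == 0, None]).mean(0)
        w -= lr * (grad_ll + lam * 2 * gap * dgap)
    return w


# Sweep the penalty weight to trace one trade-off curve.
for lam in [0.0, 0.5, 2.0, 8.0]:
    w = train(lam)
    p = sigmoid(X @ w)
    acc = ((p > 0.5) == y).mean()
    gap = abs(p[a == 1].mean() - p[a == 0].mean())
    print(f"lam={lam:>4}: accuracy={acc:.3f}  parity gap={gap:.3f}")
```

Because the trade-off's severity depends on how imbalanced or biased the data is, the same sweep on a different dataset would trace a differently shaped curve, which is the abstract's argument against a uniform fairness requirement.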
