April 15, 2024, 4:42 a.m. | Khadija Zanna, Akane Sano

cs.LG updates on arXiv.org

arXiv:2404.08230v1 Announce Type: new
Abstract: This paper addresses the need for generalizable bias mitigation techniques in machine learning, motivated by growing concerns about fairness and discrimination in data-driven decision-making across a range of industries. While many existing methods for mitigating bias in machine learning succeed in specific cases, they often lack generalizability and cannot easily be applied to different data types or models. Moreover, the trade-off between accuracy and fairness remains a fundamental tension in the field. …
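To make the accuracy/fairness trade-off mentioned in the abstract concrete, here is a minimal illustrative sketch (not from the paper itself): fairness is often quantified with group metrics such as the demographic parity difference, i.e. the gap in positive-prediction rates between demographic groups, which can move in the opposite direction from accuracy when a model is adjusted. All data and function names below are hypothetical examples.

```python
# Illustrative sketch: comparing accuracy against one common fairness
# metric, the demographic parity difference (gap in positive-prediction
# rates across groups). Toy data; labels and groups are made up.

def accuracy(y_true, y_pred):
    # Fraction of predictions matching the true labels.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def demographic_parity_difference(y_pred, groups):
    # Positive-prediction rate per group, then max-min gap across groups.
    counts = {}
    for p, g in zip(y_pred, groups):
        n, pos = counts.get(g, (0, 0))
        counts[g] = (n + 1, pos + (p == 1))
    rates = [pos / n for n, pos in counts.values()]
    return max(rates) - min(rates)

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

acc = accuracy(y_true, y_pred)                        # 0.75
dpd = demographic_parity_difference(y_pred, groups)   # 0.75 - 0.25 = 0.5
print(acc, dpd)
```

A classifier with high accuracy can still show a large demographic parity difference, which is the tension bias mitigation methods (including the one this paper proposes) attempt to manage.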

