March 11, 2024, 4:42 a.m. | Aoqi Zuo, Yiqing Li, Susan Wei, Mingming Gong

cs.LG updates on arXiv.org

arXiv:2401.10632v2 Announce Type: replace
Abstract: Fair machine learning aims to prevent discrimination against individuals or sub-populations based on sensitive attributes such as gender and race. In recent years, causal inference methods have been increasingly used in fair machine learning to measure unfairness via causal effects. However, current methods assume that the true causal graph is given, an assumption that often does not hold in real-world applications. To address this limitation, this paper proposes a framework for achieving causal fairness based on the …
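To make the idea of "measuring unfairness via causal effects" concrete, here is a minimal toy sketch (not the paper's method, and assuming a fully known linear structural causal model, which is exactly the assumption the paper seeks to relax): it estimates the average causal effect of a sensitive attribute A on a model's prediction by simulating interventions do(A = 1) and do(A = 0).

```python
import numpy as np

# Hypothetical toy SCM (an illustration only, not the paper's model):
#   A ~ Bernoulli(0.5)           sensitive attribute
#   X = 2*A + noise              feature influenced by A
#   Yhat = 0.5*X + 0.3*A         model prediction
rng = np.random.default_rng(0)
n = 100_000

def predict_under_do(a, rng):
    """Simulate the model's prediction under the intervention do(A = a)."""
    x = 2.0 * a + rng.normal(0.0, 1.0, size=a.shape)
    return 0.5 * x + 0.3 * a

# Unfairness measured as the average causal effect of A on the prediction:
#   E[Yhat | do(A=1)] - E[Yhat | do(A=0)]
ace = (predict_under_do(np.ones(n), rng).mean()
       - predict_under_do(np.zeros(n), rng).mean())
print(f"causal effect of A on prediction: {ace:.3f}")  # close to 0.5*2 + 0.3 = 1.3
```

A nonzero effect indicates that the model's output would change under an intervention on the sensitive attribute alone; the paper's setting is harder because the causal graph generating the data is not assumed known.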

