Web: http://arxiv.org/abs/2010.03933

Jan. 31, 2022, 2:11 a.m. | Zhenlong Xu (1), Ziqi Xu (1), Jixue Liu (1), Debo Cheng (1), Jiuyong Li (1), Lin Liu (1), Ke Wang (2) ((1) STEM, University of South Australia, Adelaide…

cs.LG updates on arXiv.org (arxiv.org)

The increasing application of machine learning techniques in everyday decision-making processes has raised concerns about the fairness of algorithmic decision-making. This paper addresses the problem of collider bias, which produces spurious associations in fairness assessment, and develops theorems to guide fairness assessment while avoiding collider bias. We consider a real-world application in which an audit agency audits a trained classifier. We propose an unbiased assessment algorithm that uses the developed theorems to reduce collider bias in the assessment. Experiments and …

Tags: arxiv, bias, fairness
