Nov. 3, 2022, 1:11 a.m. | Haris Mansoor, Sarwan Ali, Shafiq Alam, Muhammad Asad Khan, Umair ul Hassan, Imdadullah Khan

cs.LG updates on arXiv.org arxiv.org

Analysis of the fairness of machine learning (ML) algorithms has recently
attracted many researchers' interest. Most ML methods show bias toward
protected groups, which limits their applicability in domains such as
crime-rate prediction. Moreover, data may contain missing values which, if
not handled appropriately, are known to further harm fairness. Many
imputation methods have been proposed to deal with missing data, but their
effect on fairness is not well studied. In …
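To make the fairness concern concrete, here is a minimal sketch (not the paper's method) on synthetic data: when missingness is more frequent in one group, naive mean imputation can shift that group's predictions and widen the gap in positive-prediction rates (demographic parity difference). All names and parameters below are hypothetical illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# hypothetical protected-group indicator (0/1)
group = rng.integers(0, 2, n)
# a feature whose distribution differs slightly by group
x = rng.normal(loc=group * 0.5, scale=1.0, size=n)

# missingness is more frequent in group 1 (an MNAR-style assumption)
missing = rng.random(n) < np.where(group == 1, 0.4, 0.1)
x_obs = x.copy()
x_obs[missing] = np.nan

# naive mean imputation pools both groups, pulling imputed values
# toward the overall mean
x_imp = np.where(np.isnan(x_obs), np.nanmean(x_obs), x_obs)

# a trivial threshold "classifier" on the imputed feature
pred = (x_imp > np.nanmean(x_obs)).astype(int)

# demographic parity difference: gap in positive-prediction rates
dpd = abs(pred[group == 1].mean() - pred[group == 0].mean())
print(f"demographic parity difference: {dpd:.3f}")
```

Group-aware imputation (e.g., imputing with each group's own mean) would be the natural comparison point when studying how the imputation choice moves this gap.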

