FairSIN: Achieving Fairness in Graph Neural Networks through Sensitive Information Neutralization
March 20, 2024, 4:41 a.m. | Cheng Yang, Jixi Liu, Yunhe Yan, Chuan Shi
cs.LG updates on arXiv.org arxiv.org
Abstract: Despite the remarkable success of graph neural networks (GNNs) in modeling graph-structured data, GNNs, like other machine learning models, are susceptible to making biased predictions based on sensitive attributes such as race and gender. To address fairness concerns, recent state-of-the-art (SOTA) methods propose filtering sensitive information out of the inputs or representations, e.g., via edge dropping or feature masking. However, we argue that such filtering-based strategies may also remove non-sensitive feature information, leading to …
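To make the filtering-based baseline concrete, here is a minimal illustrative sketch (not the paper's FairSIN method) of feature masking: sensitive attribute columns of a node feature matrix are zeroed out before the features are fed to a GNN. The function name and shapes are hypothetical; real pipelines would mask inside the model or the data loader.

```python
import numpy as np

def mask_sensitive_features(X, sensitive_cols):
    """Return a copy of node features X with the given sensitive
    attribute columns zeroed out (a simple feature-masking baseline)."""
    X_masked = X.copy()
    X_masked[:, sensitive_cols] = 0.0
    return X_masked

# Hypothetical example: 4 nodes, 3 features; column 0 encodes a
# sensitive attribute (e.g., gender) that the baseline filters out.
X = np.array([[1.0, 0.5, 2.0],
              [0.0, 1.5, 0.2],
              [1.0, 0.1, 1.1],
              [0.0, 2.0, 0.7]])
X_fair = mask_sensitive_features(X, sensitive_cols=[0])
```

Note the abstract's caveat applies here: if a non-sensitive column is correlated with the masked one, masking alone does not remove the leaked signal, and masking correlated columns too would discard useful information.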