Web: http://arxiv.org/abs/2201.11596

Jan. 28, 2022, 2:11 a.m. | Sean Current, Yuntian He, Saket Gurukar, Srinivasan Parthasarathy

cs.LG updates on arXiv.org

As machine learning becomes more widely adopted across domains, it is
critical that researchers and ML engineers consider the inherent biases in
the data that may be perpetuated by the model. Recently, many studies have
shown that such biases are also absorbed by Graph Neural Network (GNN) models
when the input graph is biased. In this work, we aim to mitigate the bias
learned by GNNs by modifying the input graph. To that end, we propose FairMod, a Fair …
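The abstract concerns bias in GNN-based link prediction. As a generic illustration (not the FairMod algorithm itself, whose details are truncated here), one common way to quantify such bias is demographic parity: comparing the model's average predicted link score for intra-group versus inter-group node pairs. All names and data below are hypothetical.

```python
import numpy as np

def dp_gap(scores, groups, pairs):
    """Absolute gap between mean predicted scores of intra-group
    and inter-group candidate edges (a demographic-parity measure)."""
    intra = [s for s, (u, v) in zip(scores, pairs) if groups[u] == groups[v]]
    inter = [s for s, (u, v) in zip(scores, pairs) if groups[u] != groups[v]]
    return abs(np.mean(intra) - np.mean(inter))

# Toy example: 4 nodes split into two sensitive groups, with predicted
# link scores for 4 candidate edges.
groups = {0: "A", 1: "A", 2: "B", 3: "B"}
pairs = [(0, 1), (2, 3), (0, 2), (1, 3)]
scores = [0.9, 0.8, 0.3, 0.2]
print(dp_gap(scores, groups, pairs))  # a large gap suggests biased predictions
```

A graph-modification approach to debiasing, as the abstract describes, would then add or reweight edges so that a metric like this shrinks after retraining.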

