Oct. 21, 2022, 1:13 a.m. | Sean Current, Yuntian He, Saket Gurukar, Srinivasan Parthasarathy

cs.LG updates on arXiv.org

As machine learning becomes more widely adopted across domains, it is
critical that researchers and ML engineers consider the inherent biases in
the data that may be perpetuated by the model. Recently, many studies have
shown that such biases are also absorbed by Graph Neural Network (GNN) models
when the input graph is biased, potentially to the disadvantage of underserved
and underrepresented communities. In this work, we aim to mitigate the bias
learned by GNNs by jointly optimizing two …
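The abstract is truncated before it names the two objectives, so the following is only a generic sketch of the kind of joint objective used in fair link prediction: a standard link-prediction loss combined with a fairness penalty. All names, the `lam` trade-off weight, and the demographic-parity-style gap term are illustrative assumptions, not the authors' actual method.

```python
import numpy as np

def joint_loss(scores, labels, groups, lam=0.5):
    """Toy joint objective: link-prediction accuracy plus a fairness penalty.

    scores: predicted link probabilities for candidate edges
    labels: 1 if the edge truly exists, else 0
    groups: 0/1 group id per edge (e.g. intra- vs inter-community)
    lam:    hypothetical weight trading off accuracy against fairness
    """
    eps = 1e-9
    # Binary cross-entropy: the usual link-prediction loss.
    bce = -np.mean(labels * np.log(scores + eps)
                   + (1 - labels) * np.log(1 - scores + eps))
    # Demographic-parity-style gap: difference in mean predicted
    # link probability between the two edge groups.
    gap = abs(scores[groups == 0].mean() - scores[groups == 1].mean())
    return bce + lam * gap

# Example: equal mean scores across groups -> no fairness penalty.
scores = np.array([0.9, 0.1, 0.8, 0.2])
labels = np.array([1, 0, 1, 0])
groups = np.array([0, 0, 1, 1])
loss = joint_loss(scores, labels, groups)
```

Minimizing such a combined loss pushes the model toward accurate predictions while shrinking the disparity in predicted link rates across groups; the paper's actual formulation may differ substantially.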

arxiv fair graph link prediction recommendation
