Web: http://arxiv.org/abs/2201.08549

Jan. 24, 2022, 2:10 a.m. | O. Deniz Kose, Yanning Shen

cs.LG updates on arXiv.org

Node representation learning has demonstrated its efficacy for various
applications on graphs, leading to increasing attention to the area.
However, fairness remains largely under-explored within the field, which
may lead to results that are biased against underrepresented groups in ensuing
tasks. To this end, this work theoretically explains the sources of bias in
node representations obtained via Graph Neural Networks (GNNs). Our analysis
reveals that both nodal features and graph structure lead to bias in the
obtained representations. Building …
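To make the abstract's claim concrete, below is a minimal sketch (not the paper's specific model or notation) of a standard graph-convolution layer in NumPy. It shows how both the nodal feature matrix X and the graph structure A enter the learned representations H; the names gcn_layer, A, X, and W are illustrative placeholders introduced here for the example.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution step: H = ReLU(D^{-1/2} (A + I) D^{-1/2} X W)."""
    n = A.shape[0]
    A_hat = A + np.eye(n)                      # add self-loops
    deg = A_hat.sum(axis=1)                    # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))   # symmetric normalization
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    return np.maximum(A_norm @ X @ W, 0.0)     # ReLU nonlinearity

# Toy example: 4 nodes, 3-dimensional features, 2-dimensional output.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)      # graph structure (adjacency)
X = rng.normal(size=(4, 3))                    # nodal features
W = rng.normal(size=(3, 2))                    # learnable weights
H = gcn_layer(A, X, W)                         # representations depend on both A and X
print(H.shape)                                 # (4, 2)
```

Because H is a function of both inputs, bias present in either the nodal features or the connectivity pattern can propagate into the resulting representations, which is the intuition behind the paper's analysis.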
