April 5, 2024, 4:41 a.m. | Arjun Subramonian, Jian Kang, Yizhou Sun

cs.LG updates on arXiv.org

arXiv:2404.03139v1 Announce Type: new
Abstract: Graph Neural Networks (GNNs) often perform better for high-degree nodes than low-degree nodes on node classification tasks. This degree bias can reinforce social marginalization by, e.g., sidelining authors of lowly-cited papers when predicting paper topics in citation networks. While researchers have proposed numerous hypotheses for why GNN degree bias occurs, we find via a survey of 38 degree bias papers that these hypotheses are often not rigorously validated, and can even be contradictory. Thus, we …

