March 6, 2024, 5:42 a.m. | Shaohua Fan, Xiao Wang, Chuan Shi, Peng Cui, Bai Wang

cs.LG updates on arXiv.org

arXiv:2111.10657v3 Announce Type: replace
Abstract: Graph Neural Networks (GNNs) are typically proposed without considering the agnostic distribution shifts between training and testing graphs, which degrades their generalization ability in Out-Of-Distribution (OOD) settings. The fundamental reason for this degeneration is that most GNNs are developed under the I.I.D. hypothesis. In such a setting, GNNs tend to exploit subtle statistical correlations in the training set for prediction, even when they are spurious correlations. However, such spurious …
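The failure mode the abstract describes, a model trained under the I.I.D. hypothesis latching onto a spurious training-set correlation and then degrading on out-of-distribution graphs, can be illustrated with a toy sketch. The snippet below is not the paper's method: it uses a plain logistic regression as a stand-in for a GNN readout, and the feature names, the 0.95/0.05 correlation strengths, and the noise levels are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (illustrative, not the paper's method): a model trained where a
# spurious feature correlates with the label does well in-distribution but fails
# when that correlation is broken at test time.
import numpy as np

rng = np.random.default_rng(0)

def make_graphs(n, spurious_corr):
    """Summarize each 'graph' by two scalar features:
    - causal: a structural property that truly determines the label
    - spurious: an attribute that agrees with the label with prob. spurious_corr
    (Both features and the correlation strengths are illustrative assumptions.)
    """
    y = rng.integers(0, 2, size=n)
    causal = y + 0.3 * rng.normal(size=n)          # informative in every environment
    agree = rng.random(n) < spurious_corr
    spurious = np.where(agree, y, 1 - y) + 0.1 * rng.normal(size=n)
    return np.column_stack([causal, spurious]), y

# Training environment: spurious feature agrees with the label 95% of the time.
X_tr, y_tr = make_graphs(2000, spurious_corr=0.95)
# OOD test environment: the spurious correlation is reversed.
X_te, y_te = make_graphs(2000, spurious_corr=0.05)

# Logistic regression fit by gradient descent (stand-in for a GNN readout).
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X_tr @ w + b)))
    w -= 0.5 * X_tr.T @ (p - y_tr) / len(y_tr)
    b -= 0.5 * np.mean(p - y_tr)

def acc(X, y):
    return np.mean(((X @ w + b) > 0).astype(int) == y)

print(f"learned weights (causal, spurious): {w}")
print(f"in-distribution accuracy:     {acc(X_tr, y_tr):.2f}")
print(f"out-of-distribution accuracy: {acc(X_te, y_te):.2f}")
```

Because the spurious feature is as predictive as the causal one on the training distribution, the learned weights split between them, and accuracy drops sharply once the test environment reverses that correlation; this is the degeneration the abstract attributes to relying on the I.I.D. hypothesis.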

