Better with Less: A Data-Active Perspective on Pre-Training Graph Neural Networks. (arXiv:2311.01038v1 [cs.LG])
cs.LG updates on arXiv.org
Pre-training on graph neural networks (GNNs) aims to learn transferable
knowledge for downstream tasks with unlabeled data, and it has recently become
an active research area. The success of graph pre-training models is often
attributed to the massive amount of input data. In this paper, however, we
identify the curse of big data phenomenon in graph pre-training: more training
data do not necessarily lead to better downstream performance. Motivated by
this observation, we propose a better-with-less framework for graph
pre-training: …