Nov. 5, 2023, 6:42 a.m. | Jiarong Xu, Renhong Huang, Xin Jiang, Yuxuan Cao, Carl Yang, Chunping Wang, Yang Yang

cs.LG updates on arXiv.org

Pre-training graph neural networks (GNNs) aims to learn transferable
knowledge for downstream tasks from unlabeled data, and it has recently become
an active research area. The success of graph pre-training models is often
attributed to the massive amount of input data. In this paper, however, we
identify the curse of big data phenomenon in graph pre-training: more training
data do not necessarily lead to better downstream performance. Motivated by
this observation, we propose a better-with-less framework for graph
pre-training: …
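To make the setting concrete, below is a minimal sketch of self-supervised graph pre-training on unlabeled data, assuming PyTorch and PyTorch Geometric. The two-layer GCN encoder, edge-dropping augmentation, and contrastive loss are illustrative assumptions only; they are not the paper's proposed better-with-less framework.

    # Illustrative sketch: contrastive pre-training of a GNN encoder on an
    # unlabeled graph. Encoder, augmentation, and loss are assumptions,
    # not the method proposed in the paper.
    import torch
    import torch.nn.functional as F
    from torch_geometric.nn import GCNConv
    from torch_geometric.data import Data

    class Encoder(torch.nn.Module):
        def __init__(self, in_dim, hid_dim):
            super().__init__()
            self.conv1 = GCNConv(in_dim, hid_dim)
            self.conv2 = GCNConv(hid_dim, hid_dim)

        def forward(self, x, edge_index):
            h = F.relu(self.conv1(x, edge_index))
            return self.conv2(h, edge_index)

    def drop_edges(edge_index, p=0.2):
        # Randomly drop a fraction p of edges to create an augmented view.
        keep = torch.rand(edge_index.size(1)) > p
        return edge_index[:, keep]

    def contrastive_loss(z1, z2, tau=0.5):
        # InfoNCE-style loss: matching nodes across views are positives.
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / tau
        labels = torch.arange(z1.size(0))
        return F.cross_entropy(logits, labels)

    # Toy unlabeled graph: 100 nodes, 32 features, 400 random edges.
    data = Data(x=torch.randn(100, 32),
                edge_index=torch.randint(0, 100, (2, 400)))

    encoder = Encoder(32, 64)
    opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

    for epoch in range(10):
        opt.zero_grad()
        z1 = encoder(data.x, drop_edges(data.edge_index))
        z2 = encoder(data.x, drop_edges(data.edge_index))
        loss = contrastive_loss(z1, z2)
        loss.backward()
        opt.step()

In this kind of pipeline, the pre-trained encoder is later fine-tuned or probed on labeled downstream tasks; the paper's observation is that adding more unlabeled pre-training data to such a pipeline does not necessarily improve that downstream performance.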

