Feb. 22, 2024, 5:41 a.m. | Yuchen Yan, Peiyan Zhang, Zheng Fang, Qingqing Long

cs.LG updates on arXiv.org

arXiv:2402.13556v1 Announce Type: new
Abstract: The "Graph pre-training and fine-tuning" paradigm has significantly improved Graph Neural Networks(GNNs) by capturing general knowledge without manual annotations for downstream tasks. However, due to the immense gap of data and tasks between the pre-training and fine-tuning stages, the model performance is still limited. Inspired by prompt fine-tuning in Natural Language Processing(NLP), many endeavors have been made to bridge the gap in graph domain. But existing methods simply reformulate the form of fine-tuning tasks to …
