Universal Prompt Tuning for Graph Neural Networks
April 11, 2024, 4:42 a.m. | Taoran Fang, Yunchao Zhang, Yang Yang, Chunping Wang, Lei Chen
cs.LG updates on arXiv.org
Abstract: In recent years, prompt tuning has sparked a research surge in adapting pre-trained models. Unlike the unified pre-training strategy employed in the language field, the graph field exhibits diverse pre-training strategies, posing challenges in designing appropriate prompt-based tuning methods for graph neural networks. While some pioneering work has devised specialized prompting functions for models that employ edge prediction as their pre-training tasks, these methods are limited to specific pre-trained GNN models and lack broader applicability. …
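The abstract describes prompt-based tuning at a high level: instead of fine-tuning a pre-trained GNN's weights, a small learnable prompt is optimized while the backbone stays frozen. Since the paper's actual universal prompting function is not shown in this excerpt, the following is only a generic sketch of feature-space prompting (a learnable vector added to every node's input features); the function and variable names here are illustrative assumptions, not the authors' method.

```python
import numpy as np

def apply_prompt(node_feats, prompt):
    """Hypothetical feature-space graph prompt: add a learnable
    prompt vector to every node's features before the frozen,
    pre-trained GNN consumes them (broadcast over all nodes)."""
    return node_feats + prompt

# 5 nodes, 16-dimensional features
x = np.random.randn(5, 16)

# A common choice is to initialize the prompt at zero so tuning
# starts from the unmodified pre-trained behavior; only this
# vector would receive gradients during prompt tuning.
p = np.zeros(16)

prompted = apply_prompt(x, p)
```

Because the prompt is initialized at zero, the prompted features are identical to the originals at the start of training; optimization then moves only the prompt vector, leaving the backbone untouched.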