April 11, 2024, 4:42 a.m. | Taoran Fang, Yunchao Zhang, Yang Yang, Chunping Wang, Lei Chen

cs.LG updates on arXiv.org

arXiv:2209.15240v5 Announce Type: replace
Abstract: In recent years, prompt tuning has sparked a research surge in adapting pre-trained models. Unlike the unified pre-training strategy employed in the language field, the graph field exhibits diverse pre-training strategies, posing challenges in designing appropriate prompt-based tuning methods for graph neural networks. While some pioneering work has devised specialized prompting functions for models that employ edge prediction as their pre-training tasks, these methods are limited to specific pre-trained GNN models and lack broader applicability. …

