April 11, 2024, 4:42 a.m. | Taoran Fang, Yunchao Zhang, Yang Yang, Chunping Wang, Lei Chen

cs.LG updates on arXiv.org

arXiv:2209.15240v5 Announce Type: replace
Abstract: In recent years, prompt tuning has sparked a surge of research on adapting pre-trained models. Unlike the unified pre-training strategy employed in the language field, the graph field exhibits diverse pre-training strategies, posing challenges in designing appropriate prompt-based tuning methods for graph neural networks. While some pioneering work has devised specialized prompting functions for models that employ edge prediction as their pre-training task, these methods are limited to specific pre-trained GNN models and lack broader applicability. …
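To make the setup concrete, below is a minimal sketch of prompt-based tuning for a graph neural network: a learnable prompt vector is added to every node's input features and trained for the downstream task while the pre-trained GNN stays frozen. This is a sketch under stated assumptions, not the paper's method or API; all names here (SimpleGCN, GraphPromptFeature, the dimensions, the toy graph) are illustrative.

```python
# Sketch of feature-space graph prompt tuning: train only a prompt vector
# and a task head on top of a frozen, "pre-trained" GNN backbone.
# SimpleGCN and GraphPromptFeature are hypothetical stand-ins, not the
# paper's actual components.
import torch
import torch.nn as nn

class SimpleGCN(nn.Module):
    """Stand-in for a pre-trained GNN: one linear + aggregation layer."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, x, adj):
        # adj: dense (N, N) adjacency with self-loops (toy normalization)
        return torch.relu(adj @ self.lin(x))

class GraphPromptFeature(nn.Module):
    """Learnable prompt added uniformly to all node input features."""
    def __init__(self, in_dim):
        super().__init__()
        self.prompt = nn.Parameter(torch.zeros(1, in_dim))

    def forward(self, x):
        return x + self.prompt  # broadcasts over the N nodes

# Toy graph: 4 nodes with 8-dim features; self-loops only, for brevity.
x = torch.randn(4, 8)
adj = torch.eye(4)

gnn = SimpleGCN(8, 16)
for p in gnn.parameters():
    p.requires_grad = False  # the pre-trained backbone stays frozen

prompt = GraphPromptFeature(8)
head = nn.Linear(16, 2)  # downstream task head (e.g., graph classification)

opt = torch.optim.Adam(list(prompt.parameters()) + list(head.parameters()), lr=1e-2)
logits = head(gnn(prompt(x), adj).mean(dim=0))  # mean-pool for a graph-level prediction
loss = nn.functional.cross_entropy(logits.unsqueeze(0), torch.tensor([1]))
loss.backward()
opt.step()
```

Because only the prompt and the head receive gradients, the same mechanism can in principle be attached to a backbone regardless of which pre-training strategy produced it, which is the universality the abstract is after.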
