AnchorGT: Efficient and Flexible Attention Architecture for Scalable Graph Transformers
May 7, 2024, 4:43 a.m. | Wenhao Zhu, Guojie Song, Liang Wang, Shaoguo Liu
cs.LG updates on arXiv.org
Abstract: Graph Transformers (GTs) have significantly advanced the field of graph representation learning by overcoming the limitations of message-passing graph neural networks (GNNs) and demonstrating promising performance and expressive power. However, the quadratic complexity of the self-attention mechanism has limited the scalability of GTs, and previous approaches to this problem often suffer from degraded expressiveness or a lack of versatility. To address this issue, we propose AnchorGT, a novel attention architecture for GTs with a global receptive field …
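The abstract is truncated and does not spell out the mechanism, but the name suggests an anchor-based scheme in the spirit of other scalable attention variants: each node attends to a small set of anchor nodes rather than all n nodes, reducing the O(n²) cost of full self-attention to roughly O(n·m) for m anchors. The PyTorch sketch below illustrates that generic idea only; the function name, shapes, and random anchor selection are assumptions for illustration, not the paper's actual method.

```python
# Illustrative sketch only -- NOT the paper's AnchorGT implementation.
# Shows the generic anchor idea: every node attends to m anchor nodes
# instead of all n nodes, so the score matrix is (n, m) rather than (n, n).
import torch
import torch.nn.functional as F

def anchor_attention(x: torch.Tensor, anchor_idx: torch.Tensor) -> torch.Tensor:
    """x: (n, d) node features; anchor_idx: (m,) indices of anchor nodes."""
    n, d = x.shape
    q = x                        # queries: all n nodes
    k = x[anchor_idx]            # keys:   only the m anchors
    v = x[anchor_idx]            # values: only the m anchors
    scores = q @ k.T / d ** 0.5  # (n, m) score matrix instead of (n, n)
    attn = F.softmax(scores, dim=-1)
    return attn @ v              # (n, d) updated node features

# Usage: 1000 nodes, 16-dim features, 32 anchors. Anchors are chosen at
# random here purely for the demo; a real method would presumably pick
# them in a structure-aware way.
x = torch.randn(1000, 16)
anchors = torch.randperm(1000)[:32]
out = anchor_attention(x, anchors)
print(out.shape)  # torch.Size([1000, 16])
```

Because every node can reach every anchor, this kind of scheme keeps a global receptive field while avoiding the full n-by-n attention matrix, which is the scalability bottleneck the abstract describes.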