Feb. 8, 2024, 5:43 a.m. | Yuhui Ding, Antonio Orvieto, Bobby He, Thomas Hofmann

cs.LG updates on arXiv.org

Graph neural networks based on iterative one-hop message passing have been shown to struggle to harness information from distant nodes effectively. Conversely, graph transformers allow each node to attend to all other nodes directly, but they lack graph inductive biases and must rely on ad-hoc positional encodings. In this paper, we propose a new architecture that addresses both limitations. Our approach stems from recent breakthroughs in long-range modeling provided by deep state-space models on sequential data: for a …
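To make the first claim concrete, here is a minimal sketch (plain NumPy, not the paper's architecture) of one-hop mean aggregation on a path graph. It illustrates why a node needs k rounds of message passing before any signal from a node k hops away can reach it, which is the long-range limitation the abstract refers to. The function name `one_hop_step` and the toy graph are illustrative choices, not from the paper.

```python
# A toy illustration (assumption: standard one-hop mean aggregation,
# not the method proposed in the paper).
import numpy as np

def one_hop_step(A: np.ndarray, H: np.ndarray) -> np.ndarray:
    """One round of mean aggregation over each node's 1-hop
    neighbourhood (self-loop included). A: adjacency, H: features."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)  # per-node degree
    return (A_hat @ H) / deg                # mean over neighbours

# Path graph 0-1-2-3-4; only node 0 carries a nonzero feature.
n = 5
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
H = np.zeros((n, 1))
H[0] = 1.0

for k in range(1, 5):
    H = one_hop_step(A, H)
    print(f"after {k} layer(s), node 4 sees: {H[4, 0]:.4f}")
# Node 4 reads exactly 0 until the 4th round: information travels one
# hop per layer, and what does arrive is heavily diluted by repeated
# averaging, which is why distant-node signals are hard to exploit.
```

Graph transformers avoid this by letting every node attend to every other node in a single layer, at the cost of the structural bias above; the abstract's proposal draws on deep state-space models, whose linear recurrences are known to capture long-range dependencies on sequences.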
