March 20, 2024, 4:43 a.m. | Eugene Ku, Swetha Arunraj

cs.LG updates on arXiv.org

arXiv:2312.11730v3 Announce Type: replace
Abstract: Graph Neural Networks are notorious for their memory consumption. A recent Transformer-based GNN, the Graph Transformer (GT), has been shown to achieve superior performance when long-range dependencies exist. However, combining graph data with the Transformer architecture leads to a combinatorially worse memory issue. We propose a novel version of an "edge regularization technique" that alleviates the need for Positional Encoding and ultimately alleviates GT's out-of-memory issue. We observe that it is not clear whether having an …
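The abstract does not spell out how the edge regularization works, so the following is only a minimal sketch of one plausible reading, not the authors' method: a plain Transformer layer over node features whose attention scores are nudged toward the graph's adjacency via an auxiliary loss, so that no explicit Positional Encoding is needed. The class name `EdgeRegularizedAttention` and the weight `lambda_edge` are illustrative assumptions.

```python
# Sketch only: one possible "edge regularization" for a Transformer over graph nodes.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EdgeRegularizedAttention(nn.Module):
    def __init__(self, dim: int, lambda_edge: float = 0.1):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.out = nn.Linear(dim, dim)
        self.lambda_edge = lambda_edge  # weight of the auxiliary edge term (assumed)

    def forward(self, x: torch.Tensor, adj: torch.Tensor):
        # x: (N, dim) node features; adj: (N, N) binary adjacency matrix
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        scores = q @ k.t() / (q.shape[-1] ** 0.5)   # (N, N) pairwise attention logits
        attn = scores.softmax(dim=-1)
        # "Edge regularization" (assumed form): treat the raw scores as edge logits
        # and penalize disagreement with the observed graph structure.
        edge_loss = F.binary_cross_entropy_with_logits(scores, adj.float())
        return self.out(attn @ v), self.lambda_edge * edge_loss


if __name__ == "__main__":
    x = torch.randn(5, 16)                  # 5 nodes, 16-dim features
    adj = (torch.rand(5, 5) > 0.7).float()  # toy adjacency matrix
    layer = EdgeRegularizedAttention(16)
    out, reg = layer(x, adj)
    print(out.shape, reg.item())
```

In training, the returned `reg` term would simply be added to the task loss; whether this matches the paper's formulation is not determinable from the truncated abstract.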

