March 27, 2024, 4:42 a.m. | Huizhe Zhang, Jintang Li, Liang Chen, Zibin Zheng

cs.LG updates on arXiv.org

arXiv:2403.17656v1 Announce Type: cross
Abstract: Graph Transformers (GTs), with their powerful representation learning ability, have achieved great success across a wide range of graph tasks. However, the cost behind the outstanding performance of GTs is higher energy consumption and computational overhead. The complex structure and the quadratic complexity of attention calculation in the vanilla Transformer seriously hinder its scalability on large-scale graph data. Although existing methods have made strides in simplifying the combinations among blocks or the attention-learning paradigm to improve GTs' efficiency, a series of …
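To make the quadratic-cost point concrete, here is a minimal sketch (not taken from the paper) of dense single-head self-attention over N node features. The (N, N) score matrix is the term that grows quadratically with the number of nodes, which is what limits vanilla Transformer attention on large graphs; the node count, feature dimension, and identity projections below are illustrative assumptions only.

```python
import torch

# Illustrative sketch: dense self-attention over N graph nodes materializes an
# N x N score matrix, so memory and compute scale as O(N^2) in the node count.
N, d = 4096, 64                        # hypothetical node count and feature dim
x = torch.randn(N, d)                  # node features
q, k, v = x, x, x                      # single head, identity projections for brevity
scores = q @ k.T / d**0.5              # (N, N) attention scores -- the quadratic term
attn = torch.softmax(scores, dim=-1)   # row-wise normalization
out = attn @ v                         # (N, d) updated node representations
print(scores.shape)                    # torch.Size([4096, 4096])
```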
