March 27, 2024, 4:42 a.m. | Huizhe Zhang, Jintang Li, Liang Chen, Zibin Zheng

cs.LG updates on arXiv.org arxiv.org

arXiv:2403.17656v1 Announce Type: cross
Abstract: Graph Transformers (GTs), with their powerful representation learning ability, have achieved great success in a wide range of graph tasks. However, the cost behind the outstanding performance of GTs is higher energy consumption and computational overhead. The complex structure and the quadratic complexity of attention calculation in the vanilla transformer seriously hinder its scalability on large-scale graph data. Though existing methods have made strides in simplifying the combinations among blocks or the attention-learning paradigm to improve GTs' efficiency, a series of …
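Not part of the paper itself, but as a rough illustration of where the quadratic cost mentioned in the abstract comes from, here is a minimal NumPy sketch of vanilla dense self-attention over N node embeddings. All names and shapes below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def dense_self_attention(X, Wq, Wk, Wv):
    """Vanilla (dense) self-attention over N node embeddings.

    X: (N, d) node features. The score matrix Q @ K.T is (N, N),
    so time and memory grow quadratically with the number of nodes N.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv               # (N, d) projections
    scores = Q @ K.T / np.sqrt(K.shape[-1])        # (N, N) -- the quadratic term
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)       # row-wise softmax
    return attn @ V                                # (N, d) updated node features

# Toy usage: even 1,000 nodes already means a 1,000 x 1,000 score matrix.
rng = np.random.default_rng(0)
N, d = 1000, 64
X = rng.standard_normal((N, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out = dense_self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (1000, 64)
```

For a graph with millions of nodes, that (N, N) attention matrix is what makes the vanilla formulation impractical, which is the scalability issue the paper targets.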
