SGHormer: An Energy-Saving Graph Transformer Driven by Spikes
March 27, 2024, 4:42 a.m. | Huizhe Zhang, Jintang Li, Liang Chen, Zibin Zheng
cs.LG updates on arXiv.org
Abstract: Graph Transformers (GTs), with their powerful representation learning ability, have achieved great success on a wide range of graph tasks. However, the outstanding performance of GTs comes at the cost of higher energy consumption and computational overhead. The complex structure and the quadratic complexity of attention calculation in the vanilla transformer seriously hinder its scalability on large-scale graph data. Though existing methods have made strides in simplifying block combinations or the attention-learning paradigm to improve GTs' efficiency, a series of …
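The abstract's efficiency argument rests on the quadratic cost of vanilla attention in the number of nodes. A minimal sketch of that bottleneck in PyTorch, showing the N×N score matrix that drives the cost; this is standard scaled dot-product attention for illustration, not SGHormer's spiking architecture, which the truncated abstract does not detail:

```python
import torch

def vanilla_attention(x: torch.Tensor,
                      w_q: torch.Tensor,
                      w_k: torch.Tensor,
                      w_v: torch.Tensor) -> torch.Tensor:
    """Full self-attention over N node embeddings of shape (N, d).

    The (N, N) score matrix is where the quadratic time and memory
    cost comes from.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v           # each (N, d)
    scores = (q @ k.T) / (k.shape[-1] ** 0.5)     # (N, N): quadratic in N
    return torch.softmax(scores, dim=-1) @ v      # (N, d)

# At N = 100_000 nodes the score matrix alone holds 10**10 fp32 values
# (roughly 40 GB), which is why full attention fails to scale to large graphs.
N, d = 1_000, 64
x = torch.randn(N, d)
w_q, w_k, w_v = (torch.randn(d, d) for _ in range(3))
out = vanilla_attention(x, w_q, w_k, w_v)         # out.shape == (N, d)
```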