April 23, 2024, 4:43 a.m. | Hongyuan Zhang, Yanan Zhu, Xuelong Li

cs.LG updates on arXiv.org arxiv.org

arXiv:2304.10126v2 Announce Type: replace
Abstract: Graph neural networks (GNNs) suffer from severe inefficiency, caused mainly by the exponential growth of node dependencies as the number of layers increases. This sharply limits the use of stochastic optimization algorithms, so training a GNN is usually time-consuming. To address this problem, we propose to decouple a multi-layer GNN into multiple simple modules for more efficient training, comprising classical forward training (FT) and a designed backward training (BT). Under the …
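To make the decoupling idea concrete, here is a minimal sketch of training a GNN layer-by-layer as independent modules, so gradients never propagate across layers and the per-step node dependency stays bounded. Since the abstract is truncated, this is not the paper's exact FT/BT procedure: the `GCNLayer` module, the local classifier head, and the loss are illustrative assumptions, not the authors' design.

```python
# Hypothetical sketch of decoupled, layer-wise GNN training (assumptions, not
# the paper's exact FT/BT algorithm): each layer is trained in isolation with
# a throwaway local head, then frozen before the next layer is trained.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, h):
        return F.relu(a_hat @ self.lin(h))

def train_decoupled(a_hat, x, y, dims, num_classes, epochs=100):
    """Train each layer against a local objective, freeze it, and pass its
    embeddings forward; no end-to-end backprop through the full stack."""
    h, layers = x, []
    for in_dim, out_dim in zip([x.shape[1]] + dims[:-1], dims):
        layer = GCNLayer(in_dim, out_dim)
        head = nn.Linear(out_dim, num_classes)  # local supervision head (assumed)
        opt = torch.optim.Adam(
            list(layer.parameters()) + list(head.parameters()), lr=1e-2)
        for _ in range(epochs):
            opt.zero_grad()
            loss = F.cross_entropy(head(layer(a_hat, h)), y)
            loss.backward()
            opt.step()
        with torch.no_grad():       # freeze: embeddings computed once, so node
            h = layer(a_hat, h)     # dependency never compounds across layers
        layers.append(layer)
    return layers
```

Because each module sees only its own parameters, mini-batch sampling stays cheap at every step, which is the efficiency argument the abstract makes against end-to-end multi-layer GNN training.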
