MaxK-GNN: Towards Theoretical Speed Limits for Accelerating Graph Neural Networks Training
Feb. 23, 2024, 5:43 a.m. | Hongwu Peng, Xi Xie, Kaustubh Shivdikar, MD Amit Hasan, Jiahui Zhao, Shaoyi Huang, Omer Khan, David Kaeli, Caiwen Ding
cs.LG updates on arXiv.org
Abstract: GPUs have become the mainstream platform for accelerating deep neural network training. For graph neural networks (GNNs), however, GPUs face substantial challenges, such as workload imbalance and irregular memory access, which leave the hardware underutilized. Existing solutions such as PyG, DGL with cuSPARSE, and the GNNAdvisor framework partially address these challenges, but memory traffic remains significant.
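To make the workload-imbalance point concrete, here is a minimal sketch (not from the paper) of the sparse-dense matrix multiply (SpMM) at the heart of GNN neighbor aggregation, using SciPy's CSR format, the same layout frameworks like PyG and DGL use. The toy graph and feature values are invented for illustration; the per-row nonzero counts show why mapping one graph row per GPU thread is load-imbalanced.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Toy 5-node graph in CSR form; row i lists the neighbors of node i.
rows = [0, 0, 1, 2, 2, 2, 2]
cols = [1, 2, 0, 0, 1, 3, 4]
data = np.ones(len(rows), dtype=np.float32)
A = csr_matrix((data, (rows, cols)), shape=(5, 5))

# Node features: 5 nodes, 3 features each.
X = np.arange(15, dtype=np.float32).reshape(5, 3)

# Sum aggregation over neighbors is one sparse-dense matmul (SpMM).
H = A @ X

# Nonzeros per row differ wildly (2, 1, 4, 0, 0): the thread handling
# node 2 does four irregular gathers while nodes 3 and 4 do nothing.
degrees = np.diff(A.indptr)
print(degrees.tolist())  # -> [2, 1, 4, 0, 0]
print(H[0])              # X[1] + X[2] -> [ 9. 11. 13.]
```

The irregular gathers of `X` rows (driven by `cols`) are the memory-access irregularity the abstract refers to: neighbor indices are data-dependent, so loads do not coalesce the way dense matmul loads do.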
We argue that drastic performance improvements can only be achieved through vertical optimization spanning both algorithm and system innovations, rather than …
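The title suggests the algorithmic half of this co-design is a "MaxK" nonlinearity. As a hedged sketch only, assuming "MaxK" means keeping the k largest entries per feature row and zeroing the rest (the truncated abstract does not confirm the exact operator), such an activation yields sparse features that a system-level kernel could exploit:

```python
import numpy as np

def maxk(x: np.ndarray, k: int) -> np.ndarray:
    """Hypothetical MaxK-style activation: keep the k largest values in
    each row, zero out the rest. Interpretation assumed from the title."""
    out = np.zeros_like(x)
    # Indices of the k largest entries per row (order among them unspecified).
    idx = np.argpartition(x, -k, axis=1)[:, -k:]
    np.put_along_axis(out, idx, np.take_along_axis(x, idx, axis=1), axis=1)
    return out

x = np.array([[0.1, 0.9, 0.3, 0.7],
              [0.5, 0.2, 0.8, 0.4]])
print(maxk(x, 2))
# Each row retains exactly 2 nonzeros: row 0 keeps 0.9 and 0.7,
# row 1 keeps 0.8 and 0.5.
```

The design intuition, if this reading is right: a fixed per-row sparsity budget makes downstream sparse kernels' workloads predictable, which is exactly what an imbalance-aware GPU kernel needs.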