Sept. 15, 2022, 1:11 a.m. | Yuke Wang, Boyuan Feng, Zheng Wang, Tong Geng, Kevin Barker, Ang Li, Yufei Ding

cs.LG updates on arXiv.org arxiv.org

The increasing size of input graphs for graph neural networks (GNNs) underscores the need for multi-GPU platforms. However, existing multi-GPU GNN solutions suffer from inferior performance due to imbalanced computation and inefficient communication. To this end, we propose MGG, a novel system design to accelerate GNNs on multi-GPU platforms via a GPU-centric software pipeline. MGG explores the potential of hiding remote-memory-access latency in GNN workloads through fine-grained computation-communication pipelining. Specifically, MGG introduces a pipeline-aware workload management strategy …
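The core idea of computation-communication pipelining is to prefetch the remote data needed for the next unit of work while the current unit is being processed, so communication latency is hidden behind useful compute. A minimal sketch of this pattern, using hypothetical `fetch_remote` and `aggregate` stand-ins for MGG's remote neighbor fetches and GNN aggregation kernels:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins: in MGG these would be GPU-initiated remote
# memory reads and neighbor-aggregation kernels, not Python functions.
def fetch_remote(chunk):
    # Simulated remote memory access: fetch neighbor features for a chunk.
    return [x * 2 for x in chunk]

def aggregate(features):
    # Simulated local aggregation (e.g., a sum-reduce over neighbors).
    return sum(features)

def pipelined_aggregate(chunks):
    """Overlap the fetch of chunk i+1 with the aggregation of chunk i."""
    results = []
    with ThreadPoolExecutor(max_workers=1) as pool:
        pending = pool.submit(fetch_remote, chunks[0])
        for nxt in chunks[1:]:
            features = pending.result()               # wait for current fetch
            pending = pool.submit(fetch_remote, nxt)  # prefetch next chunk...
            results.append(aggregate(features))       # ...while computing this one
        results.append(aggregate(pending.result()))   # drain the last fetch
    return results

print(pipelined_aggregate([[1, 2], [3, 4], [5]]))  # → [6, 14, 10]
```

This double-buffered loop is only an illustration of the overlap principle; the paper's contribution is doing this at fine granularity inside GPU kernels, with a workload manager that balances chunks across GPUs.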

