Empowering GNNs with Fine-grained Communication-Computation Pipelining on Multi-GPU Platforms. (arXiv:2209.06800v1 [cs.DC])
cs.LG updates on arXiv.org
The increasing size of input graphs for graph neural networks (GNNs)
underscores the need for multi-GPU platforms. However, existing
multi-GPU GNN solutions suffer from inferior performance due to imbalanced
computation and inefficient communication. To this end, we propose MGG, a novel
system design to accelerate GNNs on multi-GPU platforms via a GPU-centric
software pipeline. MGG explores the potential of hiding remote memory access
latency in GNN workloads through fine-grained computation-communication
pipelining. Specifically, MGG introduces a pipeline-aware workload management
strategy …
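The general idea behind such pipelining, overlapping the remote fetch for the next chunk of work with the computation on the current one, can be sketched as follows. This is a generic illustration of the technique, not MGG's actual implementation; the chunking scheme and the `fetch_remote`/`compute` stages are hypothetical stand-ins for remote GPU memory access and local GNN aggregation.

```python
import concurrent.futures

def fetch_remote(chunk):
    # Stand-in for a remote-memory access (e.g., reading neighbor
    # features from a peer GPU); hypothetical, not MGG's primitive.
    return [x * 2 for x in chunk]

def compute(data):
    # Stand-in for local computation (e.g., neighbor aggregation).
    return sum(data)

def pipelined(chunks):
    """Overlap the fetch of chunk i+1 with the compute on chunk i."""
    results = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        pending = pool.submit(fetch_remote, chunks[0])
        for nxt in chunks[1:]:
            data = pending.result()                    # finish current fetch
            pending = pool.submit(fetch_remote, nxt)   # prefetch next chunk
            results.append(compute(data))              # compute overlaps fetch
        results.append(compute(pending.result()))      # drain the pipeline
    return results
```

With this structure, each remote fetch runs in the background while the previous chunk is being processed, which is how the remote-access latency gets hidden.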