Distributed Matrix-Based Sampling for Graph Neural Network Training
April 22, 2024, 4:43 a.m. | Alok Tripathy, Katherine Yelick, Aydin Buluc
cs.LG updates on arXiv.org
Abstract: Graph Neural Networks (GNNs) offer a compact and computationally efficient way to learn embeddings and classifications on graph data. GNN models are frequently large, making distributed minibatch training necessary.
The primary contribution of this paper is new methods for reducing communication in the sampling step of distributed GNN training. Here, we propose a matrix-based bulk sampling approach that expresses sampling as a sparse general matrix-matrix multiplication (SpGEMM) and samples multiple minibatches at once. When the input …
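The core idea in the abstract, expressing neighborhood sampling as a sparse matrix product, can be illustrated with a small sketch. This is not the paper's implementation; it is a toy example (using scipy.sparse and a hypothetical 6-vertex graph) of how a one-hot seed-selection matrix multiplied against the adjacency matrix expands every seed's neighborhood in a single SpGEMM:

```python
# Sketch only (not the paper's code): one round of neighbourhood
# expansion written as a sparse matrix product (SpGEMM).
import numpy as np
import scipy.sparse as sp

# Toy directed graph on 6 vertices, adjacency matrix A in CSR format.
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (4, 5)]
rows, cols = zip(*edges)
A = sp.csr_matrix((np.ones(len(edges)), (rows, cols)), shape=(6, 6))

# A minibatch of seed vertices {0, 3}, encoded as a one-hot
# selection matrix Q with one row per seed.
seeds = [0, 3]
Q = sp.csr_matrix((np.ones(len(seeds)), (range(len(seeds)), seeds)),
                  shape=(len(seeds), 6))

# One SpGEMM expands all seeds' 1-hop neighbourhoods at once; stacking
# the selection matrices of several minibatches into one Q would sample
# multiple minibatches in a single product, as the abstract describes.
N = Q @ A  # shape (2, 6): row i holds the neighbours of seeds[i]

print(sorted(N[0].indices.tolist()))  # neighbours of vertex 0 -> [1, 2]
print(sorted(N[1].indices.tolist()))  # neighbours of vertex 3 -> [4]
```

Repeating the product (e.g. `N @ A`) expands further hops; an actual sampler would additionally downselect entries of each row to a fixed fanout, which is where the communication savings of batching matter.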