April 22, 2024, 4:43 a.m. | Alok Tripathy, Katherine Yelick, Aydin Buluc

cs.LG updates on arXiv.org

arXiv:2311.02909v3 Announce Type: replace
Abstract: Graph Neural Networks (GNNs) offer a compact and computationally efficient way to learn embeddings and classifications on graph data. GNN models are frequently large, making distributed minibatch training necessary.
The primary contribution of this paper is new methods for reducing communication in the sampling step for distributed GNN training. Here, we propose a matrix-based bulk sampling approach that expresses sampling as a sparse matrix multiplication (SpGEMM) and samples multiple minibatches at once. When the input …
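The core idea of expressing minibatch sampling as a sparse matrix multiplication can be illustrated with a small sketch. This is not the authors' implementation: the graph, batch construction, and `bulk_sample` helper below are illustrative assumptions, using SciPy's CSR format to build a selection matrix `Q` whose rows pick out the seed vertices of several minibatches, so that a single SpGEMM `Q @ A` fetches every seed's neighborhood at once.

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)

# Toy graph: n vertices with a random sparse adjacency matrix in CSR form.
# (Illustrative stand-in for a real GNN training graph.)
n = 100
A = sp.random(n, n, density=0.05, format="csr", random_state=0)
A.data[:] = 1.0

def bulk_sample(A, seed_batches, k):
    """Sample up to k neighbors for every seed across several
    minibatches at once, via one sparse matrix multiplication."""
    # Stack all minibatches into a single selection matrix Q:
    # row i of Q is an indicator vector for seed vertex seeds[i].
    seeds = np.concatenate(seed_batches)
    m = len(seeds)
    Q = sp.csr_matrix(
        (np.ones(m), (np.arange(m), seeds)), shape=(m, A.shape[0])
    )
    # One SpGEMM retrieves the neighbor row of every seed in every batch.
    H = Q @ A  # m x n CSR matrix; row i holds the neighbors of seeds[i]
    sampled = []
    for i in range(m):
        nbrs = H.indices[H.indptr[i]:H.indptr[i + 1]]
        if len(nbrs) > k:
            nbrs = rng.choice(nbrs, size=k, replace=False)
        sampled.append(nbrs)
    return sampled

# Sample 4 minibatches of 8 seeds each, keeping at most 5 neighbors per seed.
batches = [rng.integers(0, n, size=8) for _ in range(4)]
neighbors = bulk_sample(A, batches, k=5)
```

In a distributed setting, the appeal of this formulation is that the single `Q @ A` product can reuse communication-avoiding SpGEMM algorithms, amortizing communication across all minibatches instead of paying it per batch.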

