April 22, 2024, 4:43 a.m. | Alok Tripathy, Katherine Yelick, Aydin Buluc

cs.LG updates on arXiv.org

arXiv:2311.02909v3 Announce Type: replace
Abstract: Graph Neural Networks (GNNs) offer a compact and computationally efficient way to learn embeddings and classifications on graph data. GNN models are frequently large, making distributed minibatch training necessary.
The primary contribution of this paper is new methods for reducing communication in the sampling step for distributed GNN training. Here, we propose a matrix-based bulk sampling approach that expresses sampling as a sparse matrix multiplication (SpGEMM) and samples multiple minibatches at once. When the input …
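The core idea — expressing neighbor sampling as a sparse matrix multiplication so that many minibatches can be sampled in one bulk operation — can be illustrated with a small sketch. This is not the paper's implementation; the function name `bulk_sample`, the toy graph, and the fanout parameter are illustrative assumptions. The sketch stacks one-hot selection rows for all batch vertices into a matrix Q, so a single SpGEMM (Q @ A) extracts the candidate neighbors for every vertex in every minibatch at once:

```python
import numpy as np
import scipy.sparse as sp

# Toy sparse adjacency matrix (6 nodes, CSR format); illustrative only.
rng = np.random.default_rng(0)
A = sp.random(6, 6, density=0.5, format="csr", random_state=0)
A.setdiag(0)
A.eliminate_zeros()

def bulk_sample(A, batches, fanout, rng):
    """Sample up to `fanout` neighbors for every vertex across several
    minibatches, using one SpGEMM instead of per-vertex lookups."""
    n = A.shape[0]
    flat = np.concatenate(batches)          # all batch vertices, stacked
    # Q: one-hot selection matrix, one row per batch vertex.
    Q = sp.csr_matrix(
        (np.ones(len(flat)), (np.arange(len(flat)), flat)),
        shape=(len(flat), n),
    )
    P = Q @ A                               # single bulk SpGEMM
    sampled = []
    for r in range(P.shape[0]):
        # Nonzeros in row r of P are the candidate neighbors of flat[r].
        nbrs = P.indices[P.indptr[r]:P.indptr[r + 1]]
        k = min(fanout, len(nbrs))
        sampled.append(
            rng.choice(nbrs, size=k, replace=False)
            if k else np.array([], dtype=P.indices.dtype)
        )
    return sampled

# Two minibatches of two vertices each, sampled in one call.
batches = [np.array([0, 1]), np.array([2, 3])]
samples = bulk_sample(A, batches, fanout=2, rng=rng)
```

Batching the selection rows is what amortizes the cost: in a distributed setting, one SpGEMM over all minibatches replaces many small communication rounds, which is the communication reduction the abstract refers to.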

