RSC: Accelerating Graph Neural Networks Training via Randomized Sparse Computations. (arXiv:2210.10737v1 [cs.LG])
cs.LG updates on arXiv.org
The training of graph neural networks (GNNs) is extremely time-consuming
because sparse graph-based operations are hard to accelerate in hardware.
Prior art explores trading off computational precision for lower time
complexity via sampling-based approximation. Based on this idea, previous
works successfully accelerate dense-matrix operations (e.g., convolution
and linear layers) with negligible accuracy drop. However, unlike dense
matrices, sparse matrices are stored in an irregular data format in which
each row/column may have a different number …
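The sampling-based approximation the abstract refers to can be illustrated with a classic randomized matrix-multiplication sketch: instead of computing A @ B over all inner dimensions, sample a subset of columns of A (and matching rows of B) with probability proportional to their norms, and rescale for unbiasedness. This is a generic illustration of the technique, not the paper's RSC method; the function name and parameters below are hypothetical.

```python
import numpy as np

def sampled_matmul(A, B, k, seed=None):
    """Approximate A @ B by sampling k of A's columns / B's rows.

    Index i is drawn with probability proportional to
    ||A[:, i]|| * ||B[i, :]||, and each sampled term is rescaled
    by 1 / (k * p_i) so the estimator is unbiased.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    p = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p = p / p.sum()
    idx = rng.choice(n, size=k, p=p)
    scale = 1.0 / (k * p[idx])          # importance-sampling weights
    return (A[:, idx] * scale) @ B[idx, :]

# Sampling half the inner dimension trades accuracy for FLOPs.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 512))
B = rng.standard_normal((512, 64))
exact = A @ B
approx = sampled_matmul(A, B, k=256, seed=1)
rel_err = np.linalg.norm(exact - approx) / np.linalg.norm(exact)
```

The variance of the estimate shrinks as k grows, so k directly controls the precision/time trade-off the abstract describes; the paper's contribution is extending this style of approximation from dense operations to irregular sparse ones.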