Oct. 20, 2022, 1:12 a.m. | Zirui Liu, Shengyuan Chen, Kaixiong Zhou, Daochen Zha, Xiao Huang, Xia Hu

cs.LG updates on arXiv.org

The training of graph neural networks (GNNs) is extremely time-consuming because sparse graph-based operations are difficult to accelerate in hardware. Prior art explores trading computational precision for lower time complexity via sampling-based approximation. Building on this idea, previous works successfully accelerate dense-matrix operations (e.g., convolution and linear layers) with a negligible accuracy drop. However, unlike dense matrices, sparse matrices are stored in an irregular data format such that each row/column may have a different number …
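The abstract is cut off here, but the sampling-based approximation it references for dense products is a well-established technique. Below is a minimal sketch, assuming the classic Monte Carlo matrix multiplication of Drineas et al. (not necessarily the exact algorithm this paper uses): sample k column-row outer products with probability proportional to their norms, then rescale so the estimate stays unbiased. The function name `sampled_matmul` and parameter `k` are hypothetical names for illustration.

```python
import numpy as np

def sampled_matmul(A, B, k, rng=None):
    """Approximate A @ B by sampling k column-row pairs (hypothetical helper).

    Samples column i of A and row i of B with probability proportional to
    ||A[:, i]|| * ||B[i, :]||, then rescales each sampled outer product by
    1 / (k * p_i) so the expectation equals the exact product A @ B.
    """
    rng = np.random.default_rng(rng)
    n = A.shape[1]
    # Importance weights: larger column-row pairs are sampled more often,
    # which minimizes the variance of the estimator.
    norms = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    probs = norms / norms.sum()
    idx = rng.choice(n, size=k, p=probs)
    # Rescale the sampled pairs to keep the estimate unbiased.
    scale = 1.0 / (k * probs[idx])
    return (A[:, idx] * scale) @ B[idx, :]

# Usage: the approximation error shrinks as k approaches A.shape[1].
A = np.random.randn(64, 512)
B = np.random.randn(512, 32)
exact = A @ B
approx = sampled_matmul(A, B, k=128, rng=0)
rel_err = np.linalg.norm(exact - approx) / np.linalg.norm(exact)
print(f"relative Frobenius error: {rel_err:.3f}")
```

This dense-case sketch also hints at the difficulty the abstract raises: with an irregular sparse format (e.g., CSR), the columns/rows have varying numbers of nonzeros, so the uniform-width sampling and rescaling above do not map cleanly onto hardware-friendly sparse kernels.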

