March 21, 2024, 4:41 a.m. | Ningyi Liao, Zihao Yu, Siqiang Luo

cs.LG updates on arXiv.org

arXiv:2403.13268v1 Announce Type: new
Abstract: Graph Neural Networks (GNNs) have shown promising performance in various graph learning tasks, but at the cost of resource-intensive computation. The primary overhead of a GNN update stems from graph propagation and weight transformation, both of which involve operations on graph-scale matrices. Previous studies attempt to reduce the computational budget by leveraging graph-level or network-level sparsification techniques, producing a downsized graph or downsized weights. In this work, we propose Unifews, which unifies the two operations in an entry-wise manner …
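To make the abstract's core idea concrete, here is a minimal, hypothetical sketch of entry-wise sparsification applied jointly to a GNN layer's two expensive operations: graph propagation (the normalized adjacency) and weight transformation. The thresholds `tau_a` and `tau_w`, the function names, and the thresholding rule are illustrative assumptions, not the actual Unifews algorithm.

```python
import numpy as np

def entrywise_sparsify(M, tau):
    """Zero out entries whose magnitude falls below tau (illustrative rule)."""
    return np.where(np.abs(M) >= tau, M, 0.0)

def sparsified_gnn_layer(A_hat, H, W, tau_a=0.01, tau_w=0.01):
    """One GNN layer with entry-wise sparsification of both the
    propagation matrix A_hat and the weight matrix W.
    A hypothetical sketch of the unified entry-wise idea, not the
    paper's method."""
    A_s = entrywise_sparsify(A_hat, tau_a)  # sparsify graph propagation
    W_s = entrywise_sparsify(W, tau_w)      # sparsify weight transformation
    return np.maximum(A_s @ H @ W_s, 0.0)   # propagate, transform, ReLU

# Toy example: 5 nodes, 4 input features, 3 output features.
rng = np.random.default_rng(0)
A_hat = rng.random((5, 5)) * 0.1          # stand-in normalized adjacency
H = rng.standard_normal((5, 4))           # node features
W = rng.standard_normal((4, 3))           # layer weights
out = sparsified_gnn_layer(A_hat, H, W)
print(out.shape)  # (5, 3)
```

Because both matrices are thresholded entry by entry rather than by dropping whole edges or whole neurons, the surviving nonzeros in `A_s` and `W_s` directly bound the cost of the two matrix products.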

