March 5, 2024, 2:41 p.m. | Junxian Li, Bin Shi, Erfei Cui, Hua Wei, Qinghua Zheng

cs.LG updates on arXiv.org

arXiv:2403.01079v1 Announce Type: new
Abstract: We study a challenging problem that Graph Neural Networks face when running inference on large-scale graph datasets: their huge time and memory consumption. We try to overcome it by reducing reliance on graph structure. Although distilling graph knowledge into a student MLP is a promising idea, it faces two major problems: loss of positional information and low generalization. To solve these problems, we propose a new three-stage multitask distillation framework. In detail, we use Positional Encoding to …
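The core idea is a teacher GNN distilled into a structure-free student MLP, with explicit positional encodings restoring the positional information the MLP would otherwise lose. Below is a minimal PyTorch sketch of that idea, assuming precomputed teacher logits and Laplacian-style positional encodings; the module names, the single distillation step shown, and the loss weighting are illustrative assumptions, not the paper's actual three-stage multitask pipeline.

```python
# Minimal sketch of GNN-to-MLP distillation with positional encodings.
# All names, dimensions, and the loss weighting are illustrative
# assumptions; the paper's three-stage pipeline is not reproduced here.
import torch
import torch.nn as nn
import torch.nn.functional as F


class StudentMLP(nn.Module):
    """Structure-free student: consumes node features concatenated with a
    per-node positional encoding instead of the graph itself."""

    def __init__(self, feat_dim, pos_dim, hidden_dim, num_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + pos_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, x, pos_enc):
        return self.net(torch.cat([x, pos_enc], dim=-1))


def distill_step(student, x, pos_enc, teacher_logits, labels,
                 optimizer, alpha=0.5, temperature=2.0):
    """One multitask update: soft-label KL distillation from the teacher
    GNN plus supervised cross-entropy on the ground-truth labels."""
    student_logits = student(x, pos_enc)
    kd_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    ce_loss = F.cross_entropy(student_logits, labels)
    loss = alpha * kd_loss + (1 - alpha) * ce_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    # Toy usage: random tensors stand in for node features, precomputed
    # teacher outputs, and Laplacian-style positional encodings.
    n, feat_dim, pos_dim, num_classes = 128, 64, 16, 7
    student = StudentMLP(feat_dim, pos_dim, hidden_dim=128,
                         num_classes=num_classes)
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    x = torch.randn(n, feat_dim)
    pos_enc = torch.randn(n, pos_dim)
    teacher_logits = torch.randn(n, num_classes)
    labels = torch.randint(0, num_classes, (n,))
    print(distill_step(student, x, pos_enc, teacher_logits, labels, opt))
```

Once trained, the student needs only node features and positional encodings at inference time, so it avoids the neighborhood aggregation that dominates GNN inference cost on large graphs.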
