March 28, 2024, 4:41 a.m. | Yongyi Yang, Jiaming Yang, Wei Hu, Michał Dereziński

cs.LG updates on arXiv.org

arXiv:2403.18142v1 Announce Type: new
Abstract: As a variant of Graph Neural Networks (GNNs), Unfolded GNNs offer enhanced interpretability and flexibility over traditional designs. Nevertheless, they still suffer from scalability challenges in training cost. Although many methods have been proposed to address these scalability issues, they mostly focus on per-iteration efficiency without worst-case convergence guarantees. Moreover, those methods typically add components to, or modify, the original model, thus possibly breaking the interpretability of Unfolded GNNs. In this …
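For context on the "unfolding" idea the abstract refers to: each layer of an Unfolded GNN corresponds to one step of an iterative optimizer applied to a graph-regularized energy. The sketch below is a minimal, hedged illustration of that construction (the energy, step size, and function names are illustrative assumptions, not the paper's actual model): each "layer" is one gradient-descent step on E(Y) = ||Y - X||_F^2 + lam * tr(Y^T L Y), where L is the graph Laplacian and X the input features.

```python
import numpy as np

def unfolded_layers(X, A, lam=1.0, step=0.1, num_layers=5):
    """Illustrative unfolded GNN: each 'layer' is one gradient step
    on the graph-regularized energy
        E(Y) = ||Y - X||_F^2 + lam * tr(Y^T L Y).
    (Hypothetical sketch; not the model from the paper.)"""
    L = np.diag(A.sum(axis=1)) - A          # combinatorial graph Laplacian
    Y = X.copy()
    for _ in range(num_layers):
        # Gradient of E at Y: fidelity term plus graph-smoothness term.
        grad = 2.0 * (Y - X) + 2.0 * lam * (L @ Y)
        Y = Y - step * grad                 # one "unfolded" layer
    return Y

# Toy usage: a 3-node path graph with scalar node features.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.array([[1.0], [0.0], [-1.0]])
Y = unfolded_layers(X, A)
```

Because every layer is an optimizer step on an explicit objective, the output can be interpreted as an approximate minimizer of E, which is the interpretability benefit the abstract alludes to; the training-cost issue arises because each such step requires a full sparse matrix product with L.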

