April 24, 2023, 12:44 a.m. | Matthieu Cordonnier, Nicolas Keriven, Nicolas Tremblay, Samuel Vaiter

stat.ML updates on arXiv.org

We study the convergence of message passing graph neural networks on random
graph models to their continuous counterparts as the number of nodes tends to
infinity. Until now, this convergence was only known for architectures with
aggregation functions in the form of degree-normalized means. We extend such
results to a very large class of aggregation functions that encompasses all
classically used message passing graph neural networks, such as attention-based
message passing or max convolutional message passing on top of
(degree-normalized) …
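To make the class of aggregation functions concrete, here is a minimal numpy sketch of one message passing layer with the three aggregation choices the abstract names: degree-normalized mean, attention-based, and max. Everything here, including the function name `message_passing_layer` and its dense-adjacency interface, is illustrative and not taken from the paper.

```python
import numpy as np

def message_passing_layer(H, A, W, aggregation="mean"):
    """One message passing step: aggregate transformed neighbor features.

    H: (n, d) node features; A: (n, n) 0/1 adjacency; W: (d, d') weights.
    Illustrative only; this is not the paper's exact formulation.
    """
    M = H @ W  # linearly transform node features into messages
    if aggregation == "mean":
        # Degree-normalized mean: sum neighbor messages, divide by degree.
        deg = A.sum(axis=1, keepdims=True).clip(min=1)
        agg = (A @ M) / deg
    elif aggregation == "max":
        # Coordinate-wise max over neighbors (-inf where there is no edge).
        masked = np.where(A[:, :, None] > 0, M[None, :, :], -np.inf)
        agg = masked.max(axis=1)
    elif aggregation == "attention":
        # Softmax-normalized attention over neighbors (simplified, GAT-style).
        # Assumes every node has at least one neighbor.
        scores = np.where(A > 0, (H @ H.T) / np.sqrt(H.shape[1]), -np.inf)
        weights = np.exp(scores - scores.max(axis=1, keepdims=True))
        weights /= weights.sum(axis=1, keepdims=True)
        agg = weights @ M
    else:
        raise ValueError(f"unknown aggregation: {aggregation}")
    return np.maximum(agg, 0.0)  # ReLU nonlinearity

# Tiny usage example on a random graph.
rng = np.random.default_rng(0)
H = rng.normal(size=(5, 4))
W = rng.normal(size=(4, 3))
A = (rng.random((5, 5)) < 0.5).astype(float)
np.fill_diagonal(A, 0)
print(message_passing_layer(H, A, W, aggregation="max").shape)  # (5, 3)
```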

