Graph neural network outputs are almost surely asymptotically constant
March 7, 2024, 5:41 a.m. | Sam Adam-Day, Michael Benedikt, İsmail İlkan Ceylan, Ben Finkelshtein
cs.LG updates on arXiv.org (arxiv.org)
Abstract: Graph neural networks (GNNs) are the predominant architectures for a variety of learning tasks on graphs. We present a new angle on the expressive power of GNNs by studying how the predictions of a GNN probabilistic classifier evolve as we apply it on larger graphs drawn from some random graph model. We show that the output converges to a constant function, which upper-bounds what these classifiers can express uniformly. This convergence phenomenon applies to a …
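To illustrate the flavour of this convergence claim, here is a minimal, hypothetical sketch (not taken from the paper): a randomly initialised mean-aggregation GNN with a graph-level sigmoid readout, applied to Erdős–Rényi graphs G(n, p) of increasing size with i.i.d. node features. The graph model, feature distribution, and one-layer architecture are illustrative assumptions only, chosen to make the effect visible in a few lines.

```python
# Sketch only: a fixed, randomly initialised mean-aggregation GNN evaluated on
# Erdos-Renyi graphs G(n, p) of growing size. The model, features, and readout
# are assumptions for illustration, not the paper's construction.
import numpy as np

rng = np.random.default_rng(0)

# Fixed random weights, shared across all graph sizes.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 1))

def gnn_readout(n, p=0.1):
    """One mean-aggregation layer + mean pooling + sigmoid 'probabilistic classifier'."""
    A = (rng.random((n, n)) < p).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                                   # undirected, no self-loops
    X = rng.normal(size=(n, 4))                   # i.i.d. node features
    deg = A.sum(axis=1, keepdims=True) + 1e-9     # avoid division by zero
    H = np.tanh(((A @ X) / deg) @ W1)             # mean neighbourhood aggregation
    return 1 / (1 + np.exp(-(H.mean(axis=0) @ W2)))  # graph-level probability

for n in [10, 100, 1000, 10000]:
    print(f"n={n:6d}  output={gnn_readout(n)[0]:.4f}")
```

Under these assumptions the mean neighbourhood features concentrate around their expectation as n grows, so the printed outputs settle near a single constant (here sigmoid(0) = 0.5), which is the kind of asymptotically constant behaviour the abstract describes.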