all AI news
Combining Stochastic Explainers and Subgraph Neural Networks can Increase Expressivity and Interpretability. (arXiv:2304.07152v1 [cs.LG])
cs.LG updates on arXiv.org
Subgraph-enhanced graph neural networks (SGNNs) can increase the expressive
power of the standard message-passing framework. This model family represents
each graph as a collection of subgraphs, generally extracted by random sampling
or with hand-crafted heuristics. Our key observation is that by selecting
"meaningful" subgraphs, besides improving the expressivity of a GNN, it is also
possible to obtain interpretable results. To this end, we introduce a
novel framework that jointly predicts the class of the graph and a set of
explanatory …
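To make the extraction step concrete, here is a minimal sketch of the random-sampling policy the abstract mentions: representing a graph as a collection of node-induced subgraphs drawn uniformly at random. The function name `sample_subgraphs`, the adjacency-dict representation, and all parameters are illustrative assumptions, not the paper's actual implementation.

```python
import random

def sample_subgraphs(adj, num_subgraphs=3, k=3, seed=0):
    """Represent a graph as a collection of k-node induced subgraphs.

    adj: adjacency dict {node: set(neighbors)} for an undirected graph.
    Each subgraph is a (node_set, edge_set) pair; nodes are chosen
    uniformly at random, edges are those with both endpoints selected.
    This is a hypothetical sketch of one common SGNN extraction policy,
    not the method proposed in the paper.
    """
    rng = random.Random(seed)
    nodes = sorted(adj)
    subgraphs = []
    for _ in range(num_subgraphs):
        chosen = set(rng.sample(nodes, k))
        # induced edge set; u < v avoids listing each undirected edge twice
        edges = {(u, v) for u in chosen for v in adj[u]
                 if v in chosen and u < v}
        subgraphs.append((chosen, edges))
    return subgraphs

# toy 4-cycle: 0-1-2-3-0
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
subs = sample_subgraphs(adj, num_subgraphs=2, k=3)
```

Hand-crafted heuristics (e.g. keeping only subgraphs containing a motif of interest) would replace the uniform `rng.sample` step; the paper's contribution is to learn which subgraphs are "meaningful" so that the selection itself becomes the explanation.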