April 17, 2023, 8:02 p.m. | Indro Spinelli, Michele Guerra, Filippo Maria Bianchi, Simone Scardapane

cs.LG updates on arXiv.org arxiv.org

Subgraph-enhanced graph neural networks (SGNNs) can increase the expressive
power of the standard message-passing framework. This model family represents
each graph as a collection of subgraphs, generally extracted by random sampling
or with hand-crafted heuristics. Our key observation is that selecting
"meaningful" subgraphs not only improves the expressivity of a GNN but also
yields interpretable results. For this purpose, we introduce a novel framework
that jointly predicts the class of the graph and a set of explanatory …
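The abstract describes SGNNs as representing each graph by a collection of subgraphs obtained by random sampling or hand-crafted heuristics. The sketch below illustrates only that extraction step under simplified assumptions (a plain adjacency-dict graph and randomly rooted ego-networks); the function and parameter names (`sample_subgraphs`, `ego_subgraph`, `num_subgraphs`, `num_hops`) are illustrative and do not come from the paper.

```python
# Minimal sketch of the subgraph-extraction step behind SGNNs, assuming an
# adjacency-dict graph representation; names are illustrative, not the paper's API.
import random

def ego_subgraph(adj, root, num_hops):
    """Return the node set of the num_hops-hop neighbourhood around root."""
    frontier, visited = {root}, {root}
    for _ in range(num_hops):
        frontier = {v for u in frontier for v in adj[u]} - visited
        visited |= frontier
    return visited

def sample_subgraphs(adj, num_subgraphs=4, num_hops=2, seed=0):
    """Represent a graph as a bag of randomly rooted node-induced subgraphs.

    Each subgraph is returned as (nodes, edges); an SGNN would run message
    passing on every subgraph and pool the results into a graph embedding.
    """
    rng = random.Random(seed)
    roots = rng.sample(sorted(adj), k=min(num_subgraphs, len(adj)))
    bag = []
    for root in roots:
        nodes = ego_subgraph(adj, root, num_hops)
        edges = {(u, v) for u in nodes for v in adj[u] if v in nodes and u < v}
        bag.append((nodes, edges))
    return bag

if __name__ == "__main__":
    # Toy 6-node graph: two triangles joined by one edge.
    adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
    for nodes, edges in sample_subgraphs(adj):
        print(sorted(nodes), sorted(edges))
```

Selecting the roots at random mirrors the "random sampling" strategy mentioned in the abstract; the paper's contribution is instead to learn which subgraphs are meaningful so that the same selection also serves as an explanation.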

