March 11, 2024, 4:42 a.m. | Tamara Pereira, Erik Nascimento, Lucas E. Resck, Diego Mesquita, Amauri Souza

cs.LG updates on arXiv.org

arXiv:2303.10139v2 Announce Type: replace
Abstract: Explaining node predictions in graph neural networks (GNNs) often boils down to finding graph substructures that preserve predictions. Finding these structures usually implies back-propagating through the GNN, tying the complexity (e.g., number of layers) of the GNN to the cost of explaining it. This naturally raises the question: Can we break this bond by explaining a simpler surrogate GNN? To answer this question, we propose Distill n' Explain (DnX). First, DnX learns a surrogate GNN …
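The abstract's core idea can be sketched in a few lines: distill a GNN's node predictions into a simple surrogate, then explain the cheap surrogate instead of the deep model. The sketch below is a minimal, illustrative assumption of how such a distillation step might look (a linear surrogate over propagated features, fit by least squares to match a stand-in "teacher's" logits); it is not the paper's implementation, and all names and shapes here are hypothetical.

```python
# Hedged sketch: distill a GNN's node-level outputs into a simple
# linear surrogate over propagated features, so downstream explanation
# only needs the surrogate. Illustrative assumption, not DnX itself.
import numpy as np

rng = np.random.default_rng(0)

n, d, c = 20, 8, 3                      # nodes, features, classes
X = rng.normal(size=(n, d))             # node features
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.maximum(A, A.T)                  # symmetric adjacency
A_hat = A + np.eye(n)                   # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
S = D_inv_sqrt @ A_hat @ D_inv_sqrt     # normalized propagation matrix

# Stand-in "teacher" GNN soft outputs (a trained model in practice).
W_teacher = rng.normal(size=(d, c))
teacher_logits = S @ (S @ X) @ W_teacher        # two propagation hops

# Distillation: fit surrogate weights Theta so that (S^2 X) @ Theta
# matches the teacher's logits (least squares = MSE distillation loss).
Z = S @ (S @ X)                                 # propagated features
Theta, *_ = np.linalg.lstsq(Z, teacher_logits, rcond=None)
surrogate_logits = Z @ Theta

# Fidelity of the surrogate to the teacher on these nodes.
fidelity = np.abs(surrogate_logits - teacher_logits).max()
```

Because the stand-in teacher is itself linear in the propagated features, the least-squares fit matches it almost exactly here; with a real nonlinear GNN the surrogate would only approximate the teacher, and that approximation error is the price paid for cheaper explanations.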

