Distill n' Explain: explaining graph neural networks using simple surrogates
March 11, 2024, 4:42 a.m. | Tamara Pereira, Erik Nascimento, Lucas E. Resck, Diego Mesquita, Amauri Souza
cs.LG updates on arXiv.org
Abstract: Explaining node predictions in graph neural networks (GNNs) often boils down to finding graph substructures that preserve predictions. Finding these structures usually implies back-propagating through the GNN, bonding the complexity (e.g., number of layers) of the GNN to the cost of explaining it. This naturally begs the question: Can we break this bond by explaining a simpler surrogate GNN? To answer the question, we propose Distill n' Explain (DnX). First, DnX learns a surrogate GNN …
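The abstract is cut off before the second step, but the core idea it states, replacing an expensive GNN with a cheap surrogate before computing explanations, lends itself to a short sketch. The snippet below is a minimal, hypothetical illustration, assuming the surrogate is a simple linear (SGC-style) GNN fit to mimic the teacher's soft predictions and that node importances are then read off the surrogate with a cheap gradient pass; all function names are illustrative and the scoring step is an assumption, not the paper's actual explanation procedure.

```python
# Sketch of "distill, then explain": fit a simple linear surrogate to a
# teacher GNN's predictions, then explain a node through the surrogate only.
# Hypothetical names (sgc_features, distill_surrogate, explain_node); not the
# authors' API.
import torch
import torch.nn.functional as F


def sgc_features(adj_norm: torch.Tensor, x: torch.Tensor, k: int = 2) -> torch.Tensor:
    """Propagate node features k hops with a normalized adjacency (SGC-style)."""
    for _ in range(k):
        x = adj_norm @ x
    return x


def distill_surrogate(adj_norm, x, teacher_logits, k=2, epochs=200, lr=0.01):
    """Fit a single linear layer on propagated features to mimic the teacher."""
    h = sgc_features(adj_norm, x, k)                      # [N, d]
    w = torch.zeros(h.size(1), teacher_logits.size(1), requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        student_logits = h @ w
        # Match the teacher's soft predictions via KL divergence.
        loss = F.kl_div(F.log_softmax(student_logits, dim=-1),
                        F.softmax(teacher_logits, dim=-1),
                        reduction="batchmean")
        loss.backward()
        opt.step()
    return w.detach()


def explain_node(adj_norm, x, w, node, k=2):
    """Score every node's contribution to `node`'s prediction via the surrogate.

    Because the surrogate is linear in the propagated features, the expensive
    teacher GNN never appears in the explanation step, so the cost no longer
    grows with the teacher's depth.
    """
    x = x.clone().requires_grad_(True)
    logits = sgc_features(adj_norm, x, k) @ w
    logits[node, logits[node].argmax()].backward()
    return x.grad.abs().sum(dim=1)  # one importance score per node
```

The design point this illustrates is the one the abstract raises: once the surrogate is fixed, explaining a prediction no longer requires back-propagating through the original GNN, so the explanation cost is decoupled from the teacher's number of layers.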