Web: http://arxiv.org/abs/2201.08802

Jan. 24, 2022, 2:10 a.m. | Ying-Xin (Shirley) Wu, Xiang Wang, An Zhang, Xia Hu, Fuli Feng, Xiangnan He, Tat-Seng Chua

cs.LG updates on arXiv.org

Explainability of graph neural networks (GNNs) aims to answer "Why did the GNN make a certain prediction?", which is crucial for interpreting model predictions. The feature attribution framework distributes a GNN's prediction over its input features (e.g., edges), identifying an influential subgraph as the explanation. When evaluating an explanation (i.e., subgraph importance), the standard approach is to audit the model's prediction on the subgraph alone. However, we argue that a distribution shift exists between the full graph and the …
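To make that evaluation protocol concrete, here is a minimal sketch in plain PyTorch. It is not the paper's code: the toy one-layer GCN (`ToyGCN`), the `audit_subgraph` helper, and the random graph and edge mask are all illustrative assumptions. The helper feeds only the explanation subgraph (non-explanation edges masked out) to the model and compares the result with the full-graph prediction, which is the "audit the subgraph alone" procedure the abstract describes.

```python
# Minimal sketch of subgraph-only explanation evaluation (hypothetical, not
# the paper's released implementation).
import torch
import torch.nn as nn

class ToyGCN(nn.Module):
    """One-layer GCN over a dense adjacency matrix with mean pooling."""
    def __init__(self, in_dim, num_classes):
        super().__init__()
        self.lin = nn.Linear(in_dim, num_classes)

    def forward(self, x, adj):
        # Symmetric normalized propagation: D^{-1/2} (A + I) D^{-1/2} X W
        a_hat = adj + torch.eye(adj.size(0))
        d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
        norm = d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)
        h = norm @ self.lin(x)
        return h.mean(dim=0)  # graph-level logits

def audit_subgraph(model, x, adj, edge_mask):
    """Audit the prediction on the explanation subgraph alone by zeroing
    all edges outside the mask, and compare with the full-graph prediction."""
    with torch.no_grad():
        full_logits = model(x, adj)
        sub_logits = model(x, adj * edge_mask)  # drop non-explanation edges
    return full_logits.argmax(), sub_logits.argmax()

torch.manual_seed(0)
n, d, c = 6, 4, 2
x = torch.randn(n, d)                              # node features
adj = (torch.rand(n, n) > 0.5).float()
adj = ((adj + adj.t()) > 0).float()                # undirected toy graph
edge_mask = (torch.rand(n, n) > 0.5).float()       # a hypothetical explanation
edge_mask = ((edge_mask + edge_mask.t()) > 0).float() * adj

model = ToyGCN(d, c)
full_pred, sub_pred = audit_subgraph(model, x, adj, edge_mask)
print(f"full-graph prediction: {full_pred}, subgraph prediction: {sub_pred}")
```

Note the concern the abstract raises about this protocol: the masked graph that reaches the model is typically unlike any graph seen during training, so a change in the subgraph prediction may reflect that distribution shift rather than genuinely low importance of the removed edges.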

arxiv, evaluation, graph neural networks
