On Representation Knowledge Distillation for Graph Neural Networks. (arXiv:2111.04964v2 [cs.LG] UPDATED)
stat.ML updates on arXiv.org
Knowledge distillation is a learning paradigm for boosting resource-efficient graph neural networks (GNNs) using more expressive yet cumbersome teacher models. Past work on distillation for GNNs proposed the Local Structure Preserving (LSP) loss, which matches local structural relationships, defined over edges, between the student's and the teacher's node embeddings. This paper studies whether preserving the global topology of how the teacher embeds graph data can be a more effective distillation objective for GNNs, as real-world graphs often contain latent interactions and …
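To make the LSP objective above concrete, here is a minimal sketch in PyTorch, assuming node embeddings h_student / h_teacher and a COO-format edge_index (all names are illustrative). It uses plain dot-product similarity and a per-node softmax; this follows the general LSP recipe described above, not necessarily the authors' exact kernel choice or implementation.

```python
import torch

def lsp_loss(h_student, h_teacher, edge_index, tau=1.0):
    # Local Structure Preserving (LSP) sketch: for each node, similarities
    # to its neighbours are softmax-normalised into a local distribution,
    # and the student's distribution is matched to the teacher's with a KL
    # divergence. Dot-product similarity is assumed here; the LSP work also
    # considers kernel similarities (e.g. RBF).
    src, dst = edge_index  # shape [2, num_edges], COO format

    def local_dist(h):
        sim = (h[src] * h[dst]).sum(dim=-1) / tau  # per-edge similarity
        exp = (sim - sim.max()).exp()              # stabilised exponent
        # Softmax over each source node's outgoing edges via scatter-add.
        denom = torch.zeros(h.size(0), device=h.device).scatter_add_(0, src, exp)
        return exp / denom[src].clamp_min(1e-12)

    p_t, p_s = local_dist(h_teacher), local_dist(h_student)
    kl = p_t * (p_t.clamp_min(1e-12).log() - p_s.clamp_min(1e-12).log())
    return kl.sum() / h_student.size(0)  # mean per-node KL(teacher || student)
```

Because the loss is defined only over existing edges, it preserves local neighbourhood structure; a global-topology objective of the kind the paper investigates would instead compare relationships beyond the 1-hop neighbourhood.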