May 26, 2022, 1:11 a.m. | Chaitanya K. Joshi, Fayao Liu, Xun Xu, Jie Lin, Chuan-Sheng Foo

stat.ML updates on arXiv.org arxiv.org

Knowledge distillation is a learning paradigm for boosting resource-efficient
graph neural networks (GNNs) using more expressive yet cumbersome teacher
models. Past work on distillation for GNNs proposed the Local Structure
Preserving loss (LSP), which matches local structural relationships defined
over edges across the student and teacher's node embeddings. This paper studies
whether preserving the global topology of how the teacher embeds graph data can
be a more effective distillation objective for GNNs, as real-world graphs often
contain latent interactions and …

arxiv, distillation, graph, graph neural networks, knowledge, networks, neural networks, representation
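
As a rough illustration of the Local Structure Preserving (LSP) objective mentioned in the abstract, the sketch below computes, for each node, a softmax distribution over the similarities to its neighbours along existing edges, and penalises the KL divergence between the teacher's and student's distributions. This is a minimal PyTorch-style sketch under stated assumptions, not the paper's implementation: the function name `lsp_loss`, the COO `edge_index` layout (as in PyTorch Geometric), the cosine-similarity kernel, and the KL direction are all illustrative choices.

```python
import torch
import torch.nn.functional as F


def lsp_loss(h_student, h_teacher, edge_index, eps=1e-8):
    """LSP-style loss sketch: per-node softmax over edge similarities,
    then KL divergence between teacher and student distributions."""
    src, dst = edge_index  # each column (src[i], dst[i]) is one directed edge

    def edge_softmax(h):
        # Cosine similarity between the two endpoints of every edge.
        sim = F.cosine_similarity(h[src], h[dst], dim=-1)
        exp = (sim - sim.max()).exp()  # global shift keeps exp() stable
        # Normalise over each source node's outgoing edges.
        denom = torch.zeros(h.size(0), device=h.device).index_add_(0, src, exp)
        return exp / (denom[src] + eps)

    p_t = edge_softmax(h_teacher)
    p_s = edge_softmax(h_student)
    # Edge-wise KL(teacher || student), aggregated per source node, then averaged.
    kl = p_t * (torch.log(p_t + eps) - torch.log(p_s + eps))
    per_node = torch.zeros(h_student.size(0), device=h_student.device)
    per_node = per_node.index_add_(0, src, kl)
    return per_node.mean()


# Toy usage: because the loss compares distributions rather than raw embeddings,
# the student can use a smaller embedding dimension than the teacher.
h_teacher = torch.randn(5, 64)
h_student = torch.randn(5, 32)
edge_index = torch.tensor([[0, 0, 1, 2, 3],
                           [1, 2, 0, 3, 4]])
loss = lsp_loss(h_student, h_teacher, edge_index)
```

The kernel used to score edge similarity (cosine vs. an RBF on embedding distances) is a design choice; the sketch picks cosine only for brevity.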
