March 8, 2024, 5:41 a.m. | Philipp Nazari, Oliver Lemke, Davide Guidobene, Artiom Gesp

cs.LG updates on arXiv.org

arXiv:2403.04636v1 Announce Type: new
Abstract: Deep Graph Neural Networks struggle with oversmoothing. This paper introduces a novel, physics-inspired GNN model designed to mitigate this issue. Our approach integrates with existing GNN architectures, introducing an entropy-aware message passing term. This term performs gradient ascent on the entropy during node aggregation, thereby preserving a certain degree of entropy in the embeddings. We conduct a comparative analysis of our model against state-of-the-art GNNs across various common datasets.
