Jan. 31, 2024, 3:46 p.m. | Shraban Kumar Chatterjee, Suman Kundu

cs.LG updates on arXiv.org arxiv.org

GNNs are widely used to solve various tasks, including node classification and link prediction. Most GNN architectures assume the initial embeddings are random or drawn from popular distributions. These initial embeddings require multiple layers of transformation to converge into a meaningful latent representation. While a larger number of layers allows a node to accumulate information from a larger neighbourhood, it also introduces the problem of over-smoothing. In addition, GNNs are inept at representing structural information. For example, the output embedding of …
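The over-smoothing problem mentioned above can be seen in a minimal sketch (not from the paper): repeatedly averaging each node's embedding with its neighbours' drives all embeddings toward a common vector, so the per-node distinctions that deeper layers were meant to refine collapse. The toy graph, initialisation, and layer count below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 5 nodes on a path, adjacency with self-loops.
A = np.eye(5)
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1

# Row-normalise so each propagation step averages a node with its neighbours.
P = A / A.sum(axis=1, keepdims=True)

# Random initial embeddings, as many GNN setups assume.
H = rng.standard_normal((5, 8))

# Simulate many message-passing layers (no learned weights, no nonlinearity).
for _ in range(50):
    H = P @ H

# After many layers the spread across nodes collapses toward zero:
# every row of H is nearly identical, i.e. the nodes are over-smoothed.
spread = H.std(axis=0).mean()
print(f"mean per-dimension std across nodes: {spread:.4f}")
```

With only one or two propagation steps the rows of `H` remain clearly distinct; after fifty steps they are nearly identical, which is why adding layers to enlarge the receptive field trades off directly against embedding distinctiveness.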

