Preventing Over-Smoothing for Hypergraph Neural Networks. (arXiv:2203.17159v2 [cs.LG] UPDATED)
Nov. 3, 2022, 1:12 a.m. | Guanzi Chen, Jiying Zhang, Xi Xiao, Yang Li
cs.LG updates on arXiv.org
In recent years, hypergraph learning has attracted great attention due to its
capacity to represent complex and high-order relationships. However, current
neural network approaches designed for hypergraphs are mostly shallow, which
limits their ability to extract information from high-order neighbors. In
this paper, we show, both theoretically and empirically, that the performance of
hypergraph neural networks does not improve as the number of layers increases,
a phenomenon known as the over-smoothing problem. To avoid this issue, we develop a
new …
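To make the over-smoothing effect concrete, here is a minimal sketch (not from the paper, which is truncated above) that repeatedly applies the standard HGNN-style propagation operator Dv^{-1/2} H De^{-1} H^T Dv^{-1/2} to random node features on a toy hypergraph. The incidence matrix H, the layer counts, and all variable names are illustrative assumptions; the point is only that the part of the features able to distinguish nodes decays geometrically as layers stack.

import numpy as np

rng = np.random.default_rng(0)

# Toy hypergraph: 6 nodes, 3 hyperedges, encoded by incidence matrix H (nodes x edges).
# This particular H is an illustrative assumption, not data from the paper.
H = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 1, 0],
              [0, 1, 1],
              [0, 0, 1],
              [1, 0, 1]], dtype=float)

dv = H.sum(axis=1)                               # node degrees
de = H.sum(axis=0)                               # hyperedge degrees
Dv_is = np.diag(dv ** -0.5)                      # Dv^{-1/2}
P = Dv_is @ H @ np.diag(1.0 / de) @ H.T @ Dv_is  # HGNN-style propagation operator

# The dominant eigenvector of P is proportional to sqrt(node degree);
# repeated propagation collapses features onto this direction.
v = np.sqrt(dv)
v /= np.linalg.norm(v)

X = rng.normal(size=(6, 4))                      # random initial node features
for layer in range(1, 33):
    X = P @ X                                    # one parameter-free propagation step
    if layer in (1, 2, 4, 8, 16, 32):
        # Energy orthogonal to the degree direction: the part of the features
        # that can still tell nodes apart. It shrinks geometrically, so deep
        # stacks wash out node-specific information -- over-smoothing.
        resid = X - np.outer(v, v @ X)
        print(f"layer {layer:2d}: distinguishing signal = {np.linalg.norm(resid):.5f}")

Running this prints a rapidly decaying "distinguishing signal": after a few dozen layers the node representations are essentially determined by node degree alone, which is the behavior the paper identifies as the reason stacking more hypergraph layers stops helping.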