June 6, 2024, 4:42 a.m. | Shuqi He, Jun Zhuang, Ding Wang, Luyao Peng, Jun Song

cs.LG updates on arXiv.org

arXiv:2406.03097v1 Announce Type: new
Abstract: Graph neural networks (GNNs) have been extensively employed in node classification. Nevertheless, recent studies indicate that GNNs are vulnerable to topological perturbations, such as adversarial attacks and edge disruptions. Considerable efforts have been devoted to mitigating these challenges. For example, pioneering Bayesian methodologies, including GraphSS and LlnDT, incorporate Bayesian label transitions and topology-based label sampling to strengthen the robustness of GNNs. However, GraphSS is hindered by slow convergence, while LlnDT faces challenges in sparse graphs. …

