GraphMU: Repairing Robustness of Graph Neural Networks via Machine Unlearning
June 21, 2024, 4:48 a.m. | Tao Wu, Xinwen Cao, Chao Wang, Shaojie Qiao, Xingping Xian, Lin Yuan, Canyixing Cui, Yanbing Liu
cs.LG updates on arXiv.org arxiv.org
Abstract: Graph Neural Networks (GNNs) have demonstrated significant application potential across many fields, yet they remain vulnerable to adversarial attacks. Numerous adversarial defense methods for GNNs have been proposed, but they serve only as defenses before poisoning and cannot repair an already poisoned GNN. There is therefore an urgent need for a method that repairs poisoned GNNs. In this paper, we address this gap by introducing the novel …
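The excerpt above describes the general idea of repairing a poisoned model by unlearning the adversarial inputs, but the paper's actual method (GraphMU) is truncated out of this excerpt. As a minimal, purely illustrative sketch of that general idea, the toy below uses a 1-hop majority-vote classifier on an adjacency list and treats "unlearning" as deleting edges flagged as suspicious (the detection step itself is hypothetical here, not taken from the paper):

```python
# Illustrative sketch only: "repair by unlearning" in the abstract's sense,
# NOT the GraphMU method (its details are not in this excerpt).
# Toy model: a node's label is the majority vote over its neighbors' labels.
from collections import Counter

def predict(adj, labels, node):
    """Predict a node's label by majority vote over its labeled neighbors."""
    votes = Counter(labels[n] for n in adj[node] if n in labels)
    return votes.most_common(1)[0][0]

def unlearn_edges(adj, flagged):
    """'Unlearn' suspected poisoned edges by removing them (both directions).

    How the edges get flagged is a hypothetical detection step, assumed here.
    """
    return {u: {v for v in nbrs
                if (u, v) not in flagged and (v, u) not in flagged}
            for u, nbrs in adj.items()}

# Toy graph: node 0's true neighbors (1, 2) carry label "A"; an adversary
# injects edges to three "B"-labelled nodes (3, 4, 5), flipping the vote.
adj = {0: {1, 2, 3, 4, 5}, 1: {0}, 2: {0}, 3: {0}, 4: {0}, 5: {0}}
labels = {1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}

poisoned = predict(adj, labels, 0)                    # "B": attack succeeded
repaired = unlearn_edges(adj, {(0, 3), (0, 4), (0, 5)})
clean = predict(repaired, labels, 0)                  # "A": repaired
```

A pre-poisoning defense would try to block the injected edges up front; the point of the repair/unlearning framing is that it acts after the fact, on an already compromised graph or model.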