Collective Certified Robustness against Graph Injection Attacks
March 5, 2024, 2:43 p.m. | Yuni Lai, Bailin Pan, Kaihuang Chen, Yancheng Yuan, Kai Zhou
cs.LG updates on arXiv.org arxiv.org
Abstract: We investigate certified robustness for GNNs under graph injection attacks. Existing research only provides sample-wise certificates that verify each node independently, which severely limits certification performance. In this paper, we present the first collective certificate, which certifies a set of target nodes simultaneously. To achieve this, we formulate the problem as a binary integer quadratic constrained linear program (BQCLP). We further develop a customized linearization technique that allows us to relax the BQCLP into …
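The abstract stops before describing the paper's customized linearization, so the following is only a generic sketch of the standard trick such relaxations build on: replacing a product of binary variables with an auxiliary variable constrained by McCormick envelopes, which turns a quadratic constraint into linear ones. All names here are illustrative, not from the paper.

```python
# Sketch: McCormick linearization of a binary product w = x * y.
# Quadratic terms over {0,1} variables can be replaced by a fresh
# variable w with four linear constraints; for fixed binary x, y the
# resulting LP pins w to exactly x * y. This only illustrates the
# general linearization idea, not the paper's specific BQCLP relaxation.
from itertools import product
from scipy.optimize import linprog

def product_envelope(x, y):
    """Return (min w, max w) under the McCormick constraints, x and y fixed."""
    # Constraints in A_ub @ [w] <= b_ub form:
    #   w <= x,  w <= y,  w >= x + y - 1,  w >= 0
    A_ub = [[1.0], [1.0], [-1.0], [-1.0]]
    b_ub = [x, y, 1.0 - x - y, 0.0]
    lo = linprog(c=[1.0], A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)]).x[0]
    hi = linprog(c=[-1.0], A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)]).x[0]
    return lo, hi

# For binary inputs the envelope is tight: both bounds equal x * y.
for x, y in product([0, 1], repeat=2):
    lo, hi = product_envelope(x, y)
    assert abs(lo - x * y) < 1e-8 and abs(hi - x * y) < 1e-8
```

The envelope is tight only at binary points; for fractional LP solutions it gives a convex over-approximation, which is what makes such relaxations sound (they can only over-count feasible attacks, never miss one).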