Verifying message-passing neural networks via topology-based bounds tightening
Feb. 22, 2024, 5:42 a.m. | Christopher Hojny, Shiqiang Zhang, Juan S. Campos, Ruth Misener
cs.LG updates on arXiv.org arxiv.org
Abstract: Since graph neural networks (GNNs) are often vulnerable to attack, we need to know when we can trust them. We develop a computationally effective approach towards providing robust certificates for message-passing neural networks (MPNNs) using a Rectified Linear Unit (ReLU) activation function. Because our work builds on mixed-integer optimization, it encodes a wide variety of subproblems, for example it admits (i) both adding and removing edges, (ii) both global and local budgets, and (iii) both …
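The abstract describes bounding a ReLU message-passing layer's outputs when an attacker may add or remove edges under a budget. As a rough illustration of that idea (not the paper's mixed-integer formulation), the sketch below propagates interval bounds through one scalar ReLU message-passing layer under a per-node edge-flip budget; all names and the scalar setup are hypothetical simplifications.

```python
# Illustrative sketch only: interval bounds for one ReLU
# message-passing layer under edge perturbations. This is a
# simplified stand-in for the paper's MILP-based certification,
# not its actual method.

def relu(x):
    return max(x, 0.0)

def node_bounds(x, adj, w_self, w_msg, budget):
    """Bounds on h_v = ReLU(w_self*x_v + w_msg*sum of neighbour features)
    when an attacker may flip up to `budget` edges incident to each node.
    x: list of scalar node features; adj: set of undirected (u, v) pairs."""
    n = len(x)
    bounds = []
    for v in range(n):
        base = w_self * x[v] + w_msg * sum(
            x[u] for u in range(n) if (u, v) in adj or (v, u) in adj)
        # Each edge flip shifts the aggregation by +/- w_msg * x[u]:
        # deleting neighbour u removes its term, inserting u adds it.
        deltas = []
        for u in range(n):
            if u == v:
                continue
            term = w_msg * x[u]
            if (u, v) in adj or (v, u) in adj:
                deltas.append(-term)  # effect of deleting edge (u, v)
            else:
                deltas.append(term)   # effect of inserting edge (u, v)
        worst_down = sum(sorted(d for d in deltas if d < 0)[:budget])
        worst_up = sum(sorted((d for d in deltas if d > 0),
                              reverse=True)[:budget])
        bounds.append((relu(base + worst_down), relu(base + worst_up)))
    return bounds
```

In the paper's setting these bounds feed a mixed-integer program, where tighter intervals mean fewer fractional ReLU relaxations and faster certification; the local `budget` here corresponds to the abstract's local budgets, and a shared cap across nodes would correspond to a global budget.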