Feb. 22, 2024, 5:42 a.m. | Christopher Hojny, Shiqiang Zhang, Juan S. Campos, Ruth Misener

cs.LG updates on arXiv.org

arXiv:2402.13937v1 Announce Type: cross
Abstract: Since graph neural networks (GNNs) are often vulnerable to attack, we need to know when we can trust them. We develop a computationally effective approach towards providing robust certificates for message-passing neural networks (MPNNs) using a Rectified Linear Unit (ReLU) activation function. Because our work builds on mixed-integer optimization, it encodes a wide variety of subproblems; for example, it admits (i) both adding and removing edges, (ii) both global and local budgets, and (iii) both …
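The mixed-integer optimization the abstract refers to relies on encoding ReLU activations with exact linear constraints and a binary variable, as is standard in neural-network verification. As a minimal sketch (not the paper's implementation), the classic big-M encoding of y = ReLU(x) over assumed pre-activation bounds l ≤ x ≤ u can be written with SciPy's MILP solver; the bound values and the helper name `relu_via_milp` are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def relu_via_milp(x0, l=-5.0, u=5.0):
    """Big-M MILP encoding of y = ReLU(x), with x fixed to x0.

    Variables: [x, y, z] with z binary. Assumes pre-activation
    bounds l <= x <= u with l < 0 < u (illustrative values only;
    certification methods derive these via bound tightening).
    Constraints:
        y >= x
        y >= 0
        y <= x - l*(1 - z)
        y <= u*z
    Together these force y = ReLU(x) exactly for any objective.
    """
    c = np.array([0.0, -1.0, 0.0])  # maximize y  ->  minimize -y
    A = np.array([
        [-1.0, 1.0, 0.0],   # y - x >= 0
        [-1.0, 1.0, -l],    # y - x - l*z <= -l  (i.e. y <= x - l*(1-z))
        [0.0,  1.0, -u],    # y - u*z <= 0       (i.e. y <= u*z)
    ])
    lc = LinearConstraint(A,
                          [0.0, -np.inf, -np.inf],
                          [np.inf, -l, 0.0])
    bounds = Bounds([x0, 0.0, 0.0], [x0, np.inf, 1.0])  # pin x to x0
    integrality = np.array([0, 0, 1])  # z is binary
    res = milp(c=c, constraints=lc, integrality=integrality, bounds=bounds)
    return res.x[1]  # the optimal y
```

In a certificate, the adversary's edge additions/removals and budgets would enter as further integer variables and linear constraints on top of encodings like this one.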
