March 19, 2024, 4:41 a.m. | Tingting Tang, Yue Niu, Salman Avestimehr, Murali Annavaram

cs.LG updates on arXiv.org

arXiv:2403.10995v1 Announce Type: new
Abstract: Graph neural networks (GNNs) play a key role in learning representations from graph-structured data and have proven useful in many applications. However, the GNN training pipeline has been shown to be vulnerable to node-feature leakage and edge-extraction attacks. This paper investigates a scenario in which an attacker aims to recover private edge information from a trained GNN model. Previous studies have employed differential privacy (DP) to add noise directly to the adjacency …
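
The abstract refers to prior work that adds DP noise directly to the adjacency matrix to protect edges. As an illustration of that general idea only, below is a minimal sketch of edge-level randomized response on a binary adjacency matrix, one standard mechanism in prior edge-DP work; the function name and the choice of randomized response (rather than, say, Laplace perturbation) are assumptions for illustration, not the method of this paper.

```python
import numpy as np

def randomized_response_adjacency(adj: np.ndarray, epsilon: float, rng=None) -> np.ndarray:
    """Perturb a symmetric binary adjacency matrix with edge-level randomized response.

    Each potential edge bit is kept with probability e^eps / (e^eps + 1) and
    flipped otherwise; the ratio p_keep / (1 - p_keep) = e^eps gives
    epsilon-edge-DP per entry. (Illustrative sketch, not this paper's mechanism.)
    """
    rng = np.random.default_rng() if rng is None else rng
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + 1.0)

    # Perturb only the upper triangle, then mirror it, so the graph stays undirected.
    n = adj.shape[0]
    upper = np.triu_indices(n, k=1)
    flips = rng.random(len(upper[0])) >= p_keep
    noisy = adj.copy()
    noisy[upper] = np.where(flips, 1 - adj[upper], adj[upper])
    noisy.T[upper] = noisy[upper]  # mirror into the lower triangle
    return noisy

# Usage: perturb a small graph before handing it to GNN training.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]])
noisy_adj = randomized_response_adjacency(adj, epsilon=1.0)
```

A smaller epsilon flips more edges (stronger privacy, noisier structure); the trade-off between this noise and GNN utility is exactly the tension such edge-privacy work navigates.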
