Unraveling Privacy Risks of Individual Fairness in Graph Neural Networks
March 21, 2024, 4:43 a.m. | He Zhang, Xingliang Yuan, Shirui Pan
cs.LG updates on arXiv.org
Abstract: Graph neural networks (GNNs) have attracted significant attention due to their expansive real-world applications. To build trustworthy GNNs, two aspects - fairness and privacy - have emerged as critical considerations. Previous studies have examined the fairness and privacy of GNNs separately, revealing the trade-off each makes with GNN performance. Yet the interplay between these two aspects remains unexplored. In this paper, we pioneer the exploration of the interaction between the privacy risks of edge leakage and …