April 2, 2024, 7:45 p.m. | Yu Sun, Gaojian Xiong, Xianxun Yao, Kailang Ma, Jian Cui

cs.LG updates on arXiv.org

arXiv:2401.11748v3 Announce Type: replace-cross
Abstract: Deep gradient inversion attacks pose a serious threat to Federated Learning (FL) by accurately recovering private data from shared gradients. However, the state of the art relies heavily on the impractical assumption of access to excessive auxiliary data, which violates the basic data partitioning principle of FL. In this paper, a novel method, Gradient Inversion Attack using Practical Image Prior (GI-PIP), is proposed under a revised threat model. GI-PIP exploits anomaly detection models to capture the underlying distribution from fewer …
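To make the attack setting concrete, below is a minimal sketch of a generic gradient inversion attack in PyTorch: a dummy image is optimized so that its gradients match the gradients shared by an FL client, with an added prior term. This is an illustration of the general gradient-matching idea only, not the authors' GI-PIP implementation; the prior_loss callable (standing in for an anomaly-detection-based image prior) and all other helper names are hypothetical.

    # Sketch of a gradient-matching inversion attack with an image-prior term.
    # Assumes a standard PyTorch classifier; prior_loss is a hypothetical
    # placeholder for an anomaly-detection-style prior (e.g. autoencoder
    # reconstruction error), not the paper's actual prior.
    import torch
    import torch.nn.functional as F

    def gradient_inversion(model, shared_grads, label, img_shape,
                           prior_loss, alpha=0.1, steps=2000, lr=0.1):
        """Recover a private input by matching the gradients of a dummy image
        to the gradients shared by an FL client, regularized by an image prior."""
        dummy = torch.randn(1, *img_shape, requires_grad=True)
        optimizer = torch.optim.Adam([dummy], lr=lr)

        for _ in range(steps):
            optimizer.zero_grad()
            pred = model(dummy)
            loss = F.cross_entropy(pred, label)
            # Gradients of the model parameters for the dummy input
            dummy_grads = torch.autograd.grad(
                loss, model.parameters(), create_graph=True)
            # Gradient-matching term: push dummy gradients toward the shared ones
            grad_match = sum(((dg - sg) ** 2).sum()
                             for dg, sg in zip(dummy_grads, shared_grads))
            # Prior term: penalize reconstructions the image prior deems anomalous
            total = grad_match + alpha * prior_loss(dummy)
            total.backward()
            optimizer.step()

        return dummy.detach()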

