April 2, 2024, 7:45 p.m. | Yu Sun, Gaojian Xiong, Xianxun Yao, Kailang Ma, Jian Cui

cs.LG updates on arXiv.org arxiv.org

arXiv:2401.11748v3 Announce Type: replace-cross
Abstract: Deep gradient inversion attacks pose a serious threat to Federated Learning (FL) by accurately recovering private data from shared gradients. However, state-of-the-art attacks rely heavily on the impractical assumption of access to excessive auxiliary data, which violates the basic data-partitioning principle of FL. In this paper, a novel method, Gradient Inversion Attack using Practical Image Prior (GI-PIP), is proposed under a revised threat model. GI-PIP exploits anomaly detection models to capture the underlying distribution from fewer …
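For context, below is a minimal sketch (assuming PyTorch) of the generic gradient-matching objective that gradient inversion attacks optimize, with an optional learned image prior acting as a regularizer. The names prior_model, alpha, and the autoencoder-style prior loss are illustrative assumptions for this sketch, not GI-PIP's actual components.

```python
# Hedged sketch of a generic gradient inversion loop: the attacker optimizes a
# dummy image so that the gradients it induces match the victim's shared
# gradients, plus an optional prior regularization term. "prior_model" and
# "alpha" are illustrative assumptions, not the paper's method.
import torch
import torch.nn.functional as F

def gradient_inversion(model, shared_grads, label, image_shape,
                       prior_model=None, alpha=0.01, steps=2000, lr=0.1):
    dummy = torch.randn(1, *image_shape, requires_grad=True)
    optimizer = torch.optim.Adam([dummy], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        # Gradients the dummy image would produce on the shared model
        loss = F.cross_entropy(model(dummy), label)
        dummy_grads = torch.autograd.grad(loss, model.parameters(),
                                          create_graph=True)

        # Gradient-matching objective: L2 distance to the victim's gradients
        grad_loss = sum(((dg - sg) ** 2).sum()
                        for dg, sg in zip(dummy_grads, shared_grads))

        # Optional image prior, e.g. reconstruction error of an
        # autoencoder-style model used as a regularizer (assumption)
        prior_loss = 0.0
        if prior_model is not None:
            prior_loss = F.mse_loss(prior_model(dummy), dummy)

        total = grad_loss + alpha * prior_loss
        total.backward()
        optimizer.step()

    return dummy.detach()
```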

