April 2, 2024, 7:45 p.m. | Yu Sun, Gaojian Xiong, Xianxun Yao, Kailang Ma, Jian Cui

cs.LG updates on arXiv.org

arXiv:2401.11748v3 Announce Type: replace-cross
Abstract: Deep gradient inversion attacks pose a serious threat to Federated Learning (FL) by accurately recovering private data from shared gradients. However, state-of-the-art attacks rely heavily on the impractical assumption of access to excessive auxiliary data, which violates the basic data-partitioning principle of FL. In this paper, a novel method, Gradient Inversion Attack using Practical Image Prior (GI-PIP), is proposed under a revised threat model. GI-PIP exploits anomaly detection models to capture the underlying distribution from fewer …
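The attack family the abstract refers to is typically built on gradient matching: the adversary optimizes a dummy input so that the gradients it produces match the victim's shared gradients. The PyTorch sketch below illustrates that baseline idea only (in the spirit of Deep Leakage from Gradients), not GI-PIP's anomaly-detection prior; the function name `gradient_inversion` and its parameters are illustrative assumptions, not the paper's API.

```python
import torch
import torch.nn.functional as F

def gradient_inversion(model, target_grads, input_shape, num_classes,
                       steps=200, lr=0.1):
    """Optimize a dummy input/label so its gradients match the shared gradients."""
    dummy_x = torch.randn(1, *input_shape, requires_grad=True)
    dummy_y = torch.randn(1, num_classes, requires_grad=True)
    optimizer = torch.optim.Adam([dummy_x, dummy_y], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        pred = model(dummy_x)
        # Soft-label cross entropy so the label can be optimized jointly.
        loss = F.cross_entropy(pred, F.softmax(dummy_y, dim=-1))
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        # Gradient-matching objective: L2 distance to the victim's shared gradients.
        match_loss = sum(((g - t) ** 2).sum() for g, t in zip(grads, target_grads))
        match_loss.backward()
        optimizer.step()

    return dummy_x.detach(), dummy_y.detach()
```

GI-PIP's contribution, per the abstract, is to replace the need for large auxiliary datasets with an anomaly-detection-based image prior; the optimization skeleton above is only the common starting point such attacks refine.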

