Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks
March 6, 2024, 5:42 a.m. | Yichang Xu, Ming Yin, Minghong Fang, Neil Zhenqiang Gong
cs.LG updates on arXiv.org
Abstract: Recent studies have revealed that federated learning (FL), once considered secure because clients do not share their private data with the server, is vulnerable to attacks such as client-side training data distribution inference, in which a malicious client can reconstruct the victim's data distribution. While various countermeasures exist, they are often impractical: they typically assume the server has access to some training data or knows the label distribution before the attack.
In this work, we bridge the gap by proposing …
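The abstract is truncated, so the paper's actual defense is not shown here. As background, a standard family of robust FL defenses replaces the server's plain averaging of client updates with a Byzantine-robust aggregation rule. The sketch below contrasts FedAvg's coordinate-wise mean with the coordinate-wise median, a classic robust aggregator; the function names and toy updates are illustrative assumptions, not the paper's method.

```python
import numpy as np

def fedavg(updates):
    # Standard FedAvg aggregation: coordinate-wise mean of client updates.
    # A single extreme update can pull the result arbitrarily far.
    return np.mean(updates, axis=0)

def median_aggregate(updates):
    # Coordinate-wise median: a classic Byzantine-robust aggregation rule.
    # With a minority of malicious clients, the aggregate stays close to
    # the honest updates in each coordinate.
    return np.median(updates, axis=0)

# Toy example (illustrative, not from the paper): four honest clients
# send similar updates; one malicious client sends an extreme update.
honest = [np.array([1.0, -0.5]) + 0.01 * i for i in range(4)]
malicious = [np.array([100.0, 100.0])]
updates = np.stack(honest + malicious)

print(fedavg(updates))           # mean is dragged toward the outlier
print(median_aggregate(updates)) # median stays near the honest updates
```

The mean lands around 20 in the first coordinate, while the median stays near 1, which is why robust aggregation is the usual starting point for limiting what a single malicious participant can do to the global model.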