Denial-of-Service or Fine-Grained Control: Towards Flexible Model Poisoning Attacks on Federated Learning. (arXiv:2304.10783v1 [cs.LG])
cs.LG updates on arXiv.org
Federated learning (FL) is vulnerable to poisoning attacks, where adversaries
corrupt the global aggregation results and cause denial-of-service (DoS).
Unlike recent model poisoning attacks that optimize the amplitude of malicious
perturbations along certain prescribed directions to cause DoS, we propose a
Flexible Model Poisoning Attack (FMPA) that can achieve versatile attack goals.
We consider a practical threat scenario where no extra knowledge about the FL
system (e.g., aggregation rules or updates on benign devices) is available to
adversaries. FMPA exploits …
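To make the attack setting concrete, here is a minimal illustrative sketch (not the paper's FMPA algorithm) of the baseline it contrasts with: a single malicious client pushing a perturbation of amplitude `lam` along a prescribed direction `d` into plain, unweighted FedAvg aggregation. All names (`lam`, `d`, `fedavg`) are assumptions chosen for illustration.

```python
import numpy as np

# Illustrative sketch only: how one malicious client shifts unweighted
# FedAvg by submitting amplitude * direction instead of an honest update.
# `lam` and `d` are hypothetical names, not from the paper.
rng = np.random.default_rng(0)

benign_updates = [rng.normal(0.0, 0.1, size=4) for _ in range(9)]  # 9 benign clients
d = np.ones(4) / 2.0   # prescribed attack direction
lam = 5.0              # perturbation amplitude the DoS-style attacker scales up
malicious_update = lam * d

def fedavg(updates):
    """Unweighted FedAvg: coordinate-wise mean of client updates."""
    return np.mean(updates, axis=0)

# Baseline: the attacker behaves passively (sends a zero update).
clean = fedavg(benign_updates + [np.zeros(4)])
# Attack: the attacker sends lam * d instead.
poisoned = fedavg(benign_updates + [malicious_update])

# With n = 10 clients, the global update shifts by exactly lam * d / 10,
# so a large amplitude along d can degrade the model toward DoS.
shift = poisoned - clean
print(shift)
```

The sketch shows why prior DoS attacks reduce to choosing an amplitude along a fixed direction, and why amplitude alone gives coarse control, which is the gap the abstract says FMPA targets with more flexible attack goals.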