April 24, 2023, 12:45 a.m. | Hangtao Zhang, Zeming Yao, Leo Yu Zhang, Shengshan Hu, Chao Chen, Alan Liew, Zhetao Li

cs.LG updates on arXiv.org

Federated learning (FL) is vulnerable to poisoning attacks, where adversaries
corrupt the global aggregation results and cause denial-of-service (DoS).
Unlike recent model poisoning attacks that optimize the amplitude of malicious
perturbations along certain prescribed directions to cause DoS, we propose a
Flexible Model Poisoning Attack (FMPA) that can achieve versatile attack goals.
We consider a practical threat scenario where no extra knowledge about the FL
system (e.g., aggregation rules or updates on benign devices) is available to
adversaries. FMPA exploits …
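The "prescribed direction" attacks the abstract contrasts against can be illustrated with a toy sketch. This is not the paper's FMPA method; it is a hypothetical example, assuming a classic sign-flipping style of model poisoning in which the attacker scales a fixed malicious direction by an amplitude and adds it to the benign mean update:

```python
import numpy as np

# Hypothetical sketch (NOT the paper's FMPA): a direction-based model
# poisoning attack. The attacker estimates the mean benign update, picks a
# prescribed direction (here, the inverse sign of the mean), and scales it
# by an amplitude to drag the global aggregate off course.

def craft_malicious_update(benign_updates: np.ndarray, amplitude: float) -> np.ndarray:
    """Return a poisoned update that opposes the mean benign direction."""
    mean_update = np.mean(benign_updates, axis=0)
    direction = -np.sign(mean_update)  # prescribed direction: flipped sign
    return mean_update + amplitude * direction

# Toy example: three benign client updates for a 4-parameter model.
benign = np.array([[0.1, -0.2, 0.3,  0.0],
                   [0.2, -0.1, 0.2,  0.1],
                   [0.0, -0.3, 0.4, -0.1]])
poisoned = craft_malicious_update(benign, amplitude=5.0)
# ≈ [-4.9, 4.8, -4.7, 0.0]
```

FMPA's point of contrast, per the abstract, is that it does not commit to one such fixed direction and amplitude, and it assumes no knowledge of the aggregation rule or benign updates.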

