Mean Aggregator Is More Robust Than Robust Aggregators Under Label Poisoning Attacks
April 23, 2024, 4:41 a.m. | Jie Peng, Weiyu Li, Qing Ling
cs.LG updates on arXiv.org
Abstract: Robustness to malicious attacks is of paramount importance for distributed learning. Existing works often consider the classical Byzantine attack model, which assumes that some workers can send arbitrarily malicious messages to the server and disturb the aggregation steps of the distributed learning process. To defend against such worst-case Byzantine attacks, various robust aggregators have been proven effective and far superior to the often-used mean aggregator. In this paper, we show that robust aggregators are too …
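The contrast the abstract draws can be illustrated with a minimal sketch of the server-side aggregation step. The coordinate-wise median below stands in for the robust aggregators the abstract mentions (it is one common choice, not necessarily the one the paper studies), and the worker gradients and attack values are hypothetical:

```python
import numpy as np

def mean_aggregator(grads):
    # Plain averaging of worker gradients; no defense against outliers.
    return np.mean(grads, axis=0)

def coordinate_wise_median(grads):
    # A classic robust aggregator: the per-coordinate median bounds
    # the influence any single Byzantine worker can exert.
    return np.median(grads, axis=0)

# Hypothetical setup: 5 honest workers with gradients near 1.0,
# plus 1 Byzantine worker sending an arbitrarily malicious message.
rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.1, size=(5, 3))
byzantine = np.full((1, 3), 100.0)
grads = np.vstack([honest, byzantine])

print(mean_aggregator(grads))       # dragged far from 1.0 by the outlier
print(coordinate_wise_median(grads))  # stays near the honest gradients
```

Under this worst-case Byzantine model the median shrugs off the attacker while the mean is dominated by it; the paper's claim is that under the weaker label-poisoning model the ranking can reverse.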