Resisting Backdoor Attacks in Federated Learning via Bidirectional Elections and Individual Perspective
March 14, 2024, 4:43 a.m. | Zhen Qin, Feiyi Chen, Chen Zhi, Xueqiang Yan, Shuiguang Deng
cs.LG updates on arXiv.org arxiv.org
Abstract: Existing approaches defend against backdoor attacks in federated learning (FL) mainly by a) mitigating the impact of infected models, or b) excluding infected models. The former degrades model accuracy, while the latter usually relies on globally clear boundaries between benign and infected model updates. In reality, however, model updates are easily mixed and scattered because of the diverse distributions of local data. This work focuses on excluding infected models in FL. …
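To make the exclusion-based defense category concrete, here is a minimal sketch of one generic approach: drop the client updates that lie farthest from the coordinate-wise median before averaging. This is an illustrative assumption, not the paper's bidirectional-election method; the function name and the `keep_fraction` parameter are hypothetical.

```python
import numpy as np

def aggregate_excluding_outliers(updates, keep_fraction=0.8):
    """Generic exclusion-based aggregation sketch (not the paper's method):
    keep only the client updates closest to the coordinate-wise median,
    then average the survivors."""
    updates = np.asarray(updates, dtype=float)   # shape: (n_clients, n_params)
    median = np.median(updates, axis=0)
    dists = np.linalg.norm(updates - median, axis=1)
    n_keep = max(1, int(len(updates) * keep_fraction))
    kept = np.argsort(dists)[:n_keep]            # indices of the closest clients
    return updates[kept].mean(axis=0)

# Toy example: four benign updates near [1, 1] and one poisoned outlier.
benign = [[1.0, 1.0], [1.1, 0.9], [0.9, 1.0], [1.0, 1.1]]
poisoned = [[10.0, -10.0]]
agg = aggregate_excluding_outliers(benign + poisoned, keep_fraction=0.8)
print(agg)  # the poisoned update is excluded; result stays near [1, 1]
```

Distance-to-median filtering like this illustrates why exclusion needs a clear benign/infected boundary: when non-IID local data scatters benign updates widely, the outlier threshold no longer separates the two groups, which is the failure mode the abstract highlights.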