March 14, 2024, 4:43 a.m. | Zhen Qin, Feiyi Chen, Chen Zhi, Xueqiang Yan, Shuiguang Deng

cs.LG updates on arXiv.org

arXiv:2309.16456v2 Announce Type: replace
Abstract: Existing approaches defend against backdoor attacks in federated learning (FL) mainly by a) mitigating the impact of infected models or b) excluding infected models. The former degrades model accuracy, while the latter usually relies on globally clear boundaries between benign and infected model updates. In practice, however, model updates are easily mixed and scattered, owing to the diverse distributions of local data. This work focuses on excluding infected models in FL. …
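The abstract is truncated before the paper's own mechanism is described, so the following is only a minimal, hypothetical sketch of what "excluding infected models" can look like in practice: a distance-based filter, in the spirit of robust aggregation rules such as trimmed mean, that drops the client updates farthest from the coordinate-wise median before averaging. The function name `exclude_suspect_updates` and the `keep_frac` parameter are illustrative, not from the paper.

```python
import numpy as np

def exclude_suspect_updates(updates, keep_frac=0.8):
    """Drop the client updates farthest from the coordinate-wise median,
    then average the survivors. A toy exclusion-style defense."""
    stacked = np.stack(updates)                       # (n_clients, n_params)
    median = np.median(stacked, axis=0)               # robust central estimate
    dists = np.linalg.norm(stacked - median, axis=1)  # one distance per client
    n_keep = max(1, int(len(updates) * keep_frac))
    keep_idx = np.argsort(dists)[:n_keep]             # closest = least suspect
    return stacked[keep_idx].mean(axis=0), keep_idx

# Toy usage: 8 benign updates near a common direction, 2 scaled malicious ones.
rng = np.random.default_rng(0)
benign = [rng.normal(0.1, 0.05, size=100) for _ in range(8)]
malicious = [np.full(100, 5.0) for _ in range(2)]
agg, kept = exclude_suspect_updates(benign + malicious)
print(kept)  # the malicious clients (indices 8 and 9) should not appear
```

The abstract's criticism applies directly to this kind of sketch: a fixed global cutoff like `keep_frac` assumes benign and infected updates separate cleanly, whereas under diverse (non-IID) local data benign updates themselves scatter, so the boundary blurs; that setting is exactly what the paper targets.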
