Feb. 16, 2024, 5:42 a.m. | Enrique Mármol Campos, Aurora González Vidal, José Luis Hernández Ramos, Antonio Skarmeta

cs.LG updates on arXiv.org

arXiv:2402.10082v1 Announce Type: new
Abstract: Federated Learning (FL) represents a promising approach to addressing the privacy concerns typically associated with centralized Machine Learning (ML) deployments. Despite its well-known advantages, FL is vulnerable to security attacks such as Byzantine behaviors and poisoning attacks, which can significantly degrade model performance and hinder convergence. The effectiveness of existing mitigation approaches, such as the median, trimmed mean, and Krum aggregation functions, has been demonstrated only partially and only against specific attacks. Our study …
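For context on the baselines the abstract names, below is a minimal sketch of two of the robust aggregation rules (coordinate-wise median and trimmed mean) that FL servers use in place of plain averaging to limit the influence of Byzantine or poisoned client updates. The function names, the trim ratio, and the toy data are illustrative assumptions, not from the paper.

import numpy as np

# Coordinate-wise median: robust to a minority of Byzantine clients.
def coordinate_median(updates):
    return np.median(np.stack(updates), axis=0)

# Coordinate-wise trimmed mean: drop the k smallest and k largest values
# per coordinate, then average what remains.
def trimmed_mean(updates, trim_ratio=0.1):
    stacked = np.stack(updates)                  # shape: (num_clients, num_params)
    k = int(trim_ratio * stacked.shape[0])
    sorted_vals = np.sort(stacked, axis=0)
    kept = sorted_vals[k:stacked.shape[0] - k] if k > 0 else sorted_vals
    return kept.mean(axis=0)

# Example: three honest clients plus one poisoned update.
updates = [np.array([1.0, 2.0]), np.array([1.1, 2.1]),
           np.array([0.9, 1.9]), np.array([100.0, -100.0])]
print(coordinate_median(updates))    # outlier has no effect on the result
print(trimmed_mean(updates, 0.25))   # drops one value per tail, per coordinate

Krum, also cited in the abstract, instead scores each client update by its distance to its nearest neighbors and selects the best-scoring one; it is omitted here for brevity.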

