Get Rid Of Your Trail: Remotely Erasing Backdoors in Federated Learning. (arXiv:2304.10638v1 [cs.LG])
cs.LG updates on arXiv.org
Federated Learning (FL) enables collaborative deep learning training across
multiple participants without exposing sensitive personal data. However, the
distributed nature of FL and the unvetted data of its participants make it
vulnerable to backdoor attacks. In these attacks, adversaries inject malicious
functionality into the centralized model during training, leading to
intentional misclassifications for specific adversary-chosen inputs. While
previous research has demonstrated successful injections of persistent
backdoors in FL, that persistence also poses a challenge, as their existence in
the centralized model can …
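As a concrete illustration of the attack the abstract describes, the sketch below (hypothetical and not from the paper; the function name, trigger shape, and poisoning rate are all assumptions) shows how a malicious FL participant could poison its local training data with a fixed pixel-pattern trigger, so that the aggregated model learns to map triggered inputs to an adversary-chosen class:

```python
import numpy as np

def poison_batch(images, labels, target_label, poison_frac=0.1, seed=0):
    """Stamp a white 3x3 trigger patch onto a fraction of images and
    relabel those images to the adversary-chosen target class.

    images: float array of shape (N, H, W), values in [0, 1]
    labels: int array of shape (N,)
    Returns the poisoned copies plus the indices that were poisoned.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_frac)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp the trigger in the bottom-right corner of each chosen image.
    images[idx, -3:, -3:] = 1.0
    # Relabel triggered images so training associates trigger -> target.
    labels[idx] = target_label
    return images, labels, idx

# Toy example: poison 10% of a batch of 28x28 "images".
x = np.zeros((100, 28, 28))
y = np.arange(100) % 10
px, py, poisoned = poison_batch(x, y, target_label=7)
print(len(poisoned), (py[poisoned] == 7).all())
```

A client training on `px, py` and then submitting its model update is the injection step; the paper's focus, per the title, is the reverse problem of remotely erasing such a backdoor once it persists in the aggregated model.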