April 24, 2023, 12:45 a.m. | Manaar Alam, Hithem Lamri, Michail Maniatakos

cs.LG updates on arXiv.org

Federated Learning (FL) enables collaborative deep learning training across
multiple participants without exposing sensitive personal data. However, the
distributed nature of FL and participants' unvetted data make it
vulnerable to backdoor attacks. In these attacks, adversaries inject malicious
functionality into the centralized model during training, leading to
intentional misclassifications for specific adversary-chosen inputs. While
previous research has demonstrated successful injections of persistent
backdoors in FL, the persistence also poses a challenge, as their existence in
the centralized model can …

