Model Transferring Attacks to Backdoor HyperNetwork in Personalized Federated Learning. (arXiv:2201.07063v2 [cs.LG] UPDATED)
Jan. 20, 2022, 2:11 a.m. | Phung Lai, NhatHai Phan, Abdallah Khreishah, Issa Khalil, Xintao Wu
cs.LG updates on arXiv.org (arxiv.org)
This paper explores previously unknown backdoor risks in HyperNet-based
personalized federated learning (HyperNetFL) through poisoning attacks.
Building on this, we propose HNTROJ, a novel model transferring attack and
the first of its kind, which transfers a locally backdoor-infected model to
all of the legitimate, personalized local models generated by the HyperNetFL
model, using consistent and effective malicious local gradients computed
across all compromised clients throughout the training process. As a result,
HNTROJ reduces the number of …
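The core idea the abstract describes — compromised clients submitting consistent malicious gradients that steer the aggregated model toward a backdoor-infected one — can be sketched generically. This is an illustrative sketch of that generic poisoning pattern, not the paper's HNTROJ algorithm; all names, rates, and the FedAvg-style aggregation are assumptions for illustration.

```python
import numpy as np

# Generic federated-poisoning sketch (NOT the HNTROJ algorithm):
# compromised clients all report the same malicious update pulling
# the server model toward an attacker's backdoor-infected model,
# while honest clients submit small noisy updates.

rng = np.random.default_rng(0)
dim = 8
server_model = rng.normal(size=dim)    # current global weights (hypothetical)
backdoor_model = rng.normal(size=dim)  # attacker's locally infected model

def malicious_update(server_w, target_w, scale=1.0):
    """Consistent 'gradient' steering the server toward the backdoor model."""
    return scale * (target_w - server_w)

def benign_update(server_w):
    """Honest clients send small noisy updates (stand-in for real training)."""
    return rng.normal(scale=0.05, size=server_w.shape)

n_clients, n_compromised, lr = 10, 3, 0.3
for _ in range(50):  # federated training rounds
    updates = [malicious_update(server_model, backdoor_model)
               for _ in range(n_compromised)]
    updates += [benign_update(server_model)
                for _ in range(n_clients - n_compromised)]
    # FedAvg-style aggregation: even a minority of consistent malicious
    # gradients biases every round toward the backdoor model.
    server_model = server_model + lr * np.mean(updates, axis=0)

final_dist = float(np.linalg.norm(server_model - backdoor_model))
print(final_dist)  # distance to the backdoor model after training
```

Because the malicious updates are consistent across rounds and clients, their average survives aggregation and the server model converges toward the attacker's target even though most clients are honest.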