April 23, 2024, 4:42 a.m. | Rong Wang, Guichen Zhou, Mingjun Gao, Yunpeng Xiao

cs.LG updates on arXiv.org

arXiv:2404.13946v1 Announce Type: new
Abstract: In recent years, neural network backdoors hidden in the parameters of federated learning models have been shown to pose serious security risks. Considering the characteristics of trigger generation, data poisoning, and model training in backdoor attacks, this paper designs a backdoor attack method based on federated learning. First, to improve the concealment of the backdoor trigger, a TrojanGan steganography model with an encoder-decoder structure is designed. The model can encode specific attack information as …
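The abstract is truncated, but the TrojanGan component it describes is an encoder-decoder steganography network that hides attack information inside an image-borne trigger while keeping the trigger visually inconspicuous. Below is a minimal PyTorch sketch of that general idea; every concrete detail (layer sizes, the 32-bit message length, the residual scaling, the loss weighting) is an illustrative assumption, not the paper's actual architecture.

```python
# Sketch of an encoder-decoder steganography trigger generator, in the
# spirit of the TrojanGan model named in the abstract. All hyperparameters
# below are assumptions for illustration, not the paper's design.
import torch
import torch.nn as nn

class TriggerEncoder(nn.Module):
    """Hides a binary attack message inside a cover image, producing a
    visually similar 'triggered' image (the concealed backdoor trigger)."""
    def __init__(self, msg_len: int = 32, img_channels: int = 3):
        super().__init__()
        self.msg_len = msg_len
        self.net = nn.Sequential(
            nn.Conv2d(img_channels + msg_len, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, img_channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, img: torch.Tensor, msg: torch.Tensor) -> torch.Tensor:
        # Broadcast the message to a per-pixel feature map, concatenate it
        # with the cover image, and predict a small residual perturbation.
        b, _, h, w = img.shape
        msg_map = msg.view(b, self.msg_len, 1, 1).expand(b, self.msg_len, h, w)
        residual = self.net(torch.cat([img, msg_map], dim=1))
        return torch.clamp(img + 0.1 * residual, 0.0, 1.0)

class TriggerDecoder(nn.Module):
    """Recovers the hidden message from a triggered image, so the attacker
    can verify the trigger still carries the attack information."""
    def __init__(self, msg_len: int = 32, img_channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(img_channels, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, msg_len),
        )

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        return self.net(img)  # logits over the hidden message bits

# Joint training objective: keep the triggered image close to the cover
# image (concealment) while keeping the message recoverable (payload).
encoder, decoder = TriggerEncoder(), TriggerDecoder()
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)
cover = torch.rand(8, 3, 32, 32)             # stand-in cover images
msg = torch.randint(0, 2, (8, 32)).float()   # random attack payloads

triggered = encoder(cover, msg)
loss = nn.functional.mse_loss(triggered, cover) \
     + nn.functional.binary_cross_entropy_with_logits(decoder(triggered), msg)
opt.zero_grad(); loss.backward(); opt.step()
```

The two-term loss captures the trade-off the abstract emphasizes: the reconstruction term enforces trigger concealment, while the decoding term ensures the encoded attack information survives embedding.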

