Web: http://arxiv.org/abs/2201.12211

Jan. 31, 2022, 2:11 a.m. | Siddhartha Datta, Nigel Shadbolt

cs.LG updates on arXiv.org

Malicious agents in collaborative learning and outsourced data collection
threaten the training of clean models. Backdoor attacks, in which an attacker
poisons a model during training to achieve targeted misclassification, are a
major concern for train-time robustness. In this paper, we investigate a
multi-agent backdoor attack scenario, where multiple attackers attempt to
backdoor a victim model simultaneously. A consistent backfiring phenomenon is
observed across a wide range of games, where agents suffer from a low
collective attack success rate. We …
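
To make the setting concrete, here is a minimal sketch (not the authors' code) of how multiple attackers might each poison a shared training set with their own trigger pattern and target label, and how a per-attacker attack success rate (ASR) could be measured afterwards. All names (apply_trigger, poison_dataset, attack_success_rate) and the trigger shapes are illustrative assumptions; the paper's experiments train a real victim model on the jointly poisoned data, whereas the classifier below is a stand-in.

```python
# Illustrative sketch of a multi-agent backdoor poisoning setup.
# Assumption: each attacker stamps a small pixel trigger on a fraction of the
# training images and relabels them to its own target class.
import numpy as np

rng = np.random.default_rng(0)

def apply_trigger(images, trigger, corner):
    """Stamp a small pixel-pattern trigger onto a batch of (H, W) images."""
    patched = images.copy()
    th, tw = trigger.shape
    r, c = corner
    patched[:, r:r + th, c:c + tw] = trigger
    return patched

def poison_dataset(images, labels, attackers, poison_rate=0.1):
    """Each attacker independently poisons its own random slice of the data.

    `attackers` is a list of dicts with keys: 'trigger', 'corner', 'target'.
    """
    images, labels = images.copy(), labels.copy()
    n = len(images)
    for atk in attackers:
        idx = rng.choice(n, size=int(poison_rate * n), replace=False)
        images[idx] = apply_trigger(images[idx], atk["trigger"], atk["corner"])
        labels[idx] = atk["target"]  # targeted misclassification label
    return images, labels

def attack_success_rate(predict_fn, clean_images, attacker):
    """Fraction of triggered test inputs classified as the attacker's target."""
    triggered = apply_trigger(clean_images, attacker["trigger"], attacker["corner"])
    preds = predict_fn(triggered)
    return float(np.mean(preds == attacker["target"]))

# Two attackers with distinct triggers and target classes (toy 28x28 data).
attackers = [
    {"trigger": np.ones((3, 3)), "corner": (0, 0), "target": 1},
    {"trigger": np.zeros((3, 3)), "corner": (25, 25), "target": 7},
]
train_x = rng.random((1000, 28, 28))
train_y = rng.integers(0, 10, size=1000)
poisoned_x, poisoned_y = poison_dataset(train_x, train_y, attackers)

# Stand-in classifier; in the paper this is the victim model trained on the
# jointly poisoned set. With many competing triggers, each attacker's ASR
# tends to collapse -- the "backfiring" effect the abstract describes.
dummy_predict = lambda x: rng.integers(0, 10, size=len(x))
for i, atk in enumerate(attackers):
    print(f"attacker {i}: ASR = {attack_success_rate(dummy_predict, train_x, atk):.2f}")
```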

arxiv attacks
