Web: http://arxiv.org/abs/2104.03413

Jan. 27, 2022, 2:11 a.m. | Yi Zeng, Won Park, Z. Morley Mao, Ruoxi Jia

cs.LG updates on arXiv.org

Backdoor attacks have been considered a severe security threat to deep
learning. Such attacks can make models perform abnormally on inputs with
predefined triggers and still retain state-of-the-art performance on clean
data. While backdoor attacks have been thoroughly investigated in the image
domain from both attackers' and defenders' sides, an analysis in the frequency
domain has been missing thus far.
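The threat model described above can be sketched in a few lines (an illustrative toy example, not the authors' construction): the attacker stamps a small predefined trigger patch onto an image, so a poisoned model can be made to misbehave on any input carrying the patch while clean inputs stay untouched. The patch size, position, and checkerboard pattern here are assumptions for illustration only.

```python
import numpy as np

def stamp_trigger(image, patch, top, left):
    """Return a copy of `image` with `patch` stamped at (top, left)."""
    poisoned = image.copy()
    h, w = patch.shape
    poisoned[top:top + h, left:left + w] = patch
    return poisoned

# Illustrative setup: a smooth 32x32 grayscale image and a 4x4
# checkerboard trigger placed in the bottom-right corner.
clean = np.linspace(0.0, 1.0, 32 * 32).reshape(32, 32)
trigger = (np.indices((4, 4)).sum(axis=0) % 2).astype(float)

poisoned = stamp_trigger(clean, trigger, top=28, left=28)
# Only the 4x4 trigger region differs; every other pixel is unchanged.
```

At training time the attacker would pair such poisoned images with a chosen target label; at test time any input carrying the same patch activates the backdoor.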

This paper first revisits existing backdoor triggers from a frequency
perspective and performs a comprehensive analysis. Our results show that …
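One way to make the frequency-domain angle concrete (a sketch under simplifying assumptions, not the paper's exact analysis pipeline) is to compare the 2D FFT spectra of a clean image and the same image carrying a sharp trigger patch: a checkerboard-like patch injects energy at high spatial frequencies that a smooth image lacks. The constant baseline image and the energy-fraction metric below are illustrative choices.

```python
import numpy as np

def highfreq_energy(image, cutoff=8):
    """Fraction of spectral energy outside a centered low-frequency band."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    low = spectrum[cy - cutoff:cy + cutoff, cx - cutoff:cx + cutoff].sum()
    return 1.0 - low / spectrum.sum()

# A perfectly smooth baseline image (constant, for simplicity) versus the
# same image with a 4x4 checkerboard trigger in the bottom-right corner.
clean = np.full((32, 32), 0.5)
poisoned = clean.copy()
poisoned[28:, 28:] = np.indices((4, 4)).sum(axis=0) % 2

# The checkerboard trigger adds energy concentrated at high spatial
# frequencies, which the clean image does not have.
print(highfreq_energy(clean), highfreq_energy(poisoned))
```

A detector built on this idea would flag inputs whose spectra show anomalous high-frequency energy relative to natural images, which tend to concentrate energy at low frequencies.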
