Web: http://arxiv.org/abs/2201.08619

Jan. 24, 2022, 2:10 a.m. | Hua Ma, Yinshan Li, Yansong Gao, Alsharif Abuadbba, Zhi Zhang, Anmin Fu, Hyoungshick Kim, Said F. Al-Sarawi, Surya Nepal, Derek Abbott

cs.CV updates on arXiv.org

Deep learning models have been shown to be vulnerable to backdoor attacks. A backdoored model behaves normally on inputs that do not contain the attacker's secretly chosen trigger, and maliciously on inputs that do. To date, backdoor attacks and countermeasures have focused mainly on image classification tasks, and most are implemented in the digital world with digital triggers. Beyond classification, object detection is also a foundational computer vision task. However, there is no …
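The trigger-based behavior described above is typically planted by poisoning a fraction of the training data: a small patch (the digital trigger) is stamped onto selected images and their labels are flipped to an attacker-chosen target class. A minimal sketch of this classification-style data poisoning, with all function names and parameters hypothetical and not taken from the paper:

```python
import numpy as np

def stamp_trigger(image, trigger, x=0, y=0):
    """Overwrite a small region of an (H, W, C) uint8 image with a patch trigger."""
    poisoned = image.copy()
    th, tw = trigger.shape[:2]
    poisoned[y:y + th, x:x + tw] = trigger
    return poisoned

def poison_dataset(images, labels, trigger, target_label, rate=0.1, seed=0):
    """Poison a fraction `rate` of the dataset: stamp the trigger onto the
    selected images and relabel them with the attacker's target class.
    Returns the poisoned copies and the indices that were modified."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images = [img.copy() for img in images]
    labels = list(labels)
    for i in idx:
        images[i] = stamp_trigger(images[i], trigger)
        labels[i] = target_label
    return images, labels, idx
```

A model trained on the mixed clean-plus-poisoned set learns to associate the patch with the target class, which yields exactly the dual behavior in the abstract: normal predictions on clean inputs, attacker-chosen predictions when the trigger is present. Extending this to object detection (the paper's setting) additionally requires manipulating bounding-box annotations, which this sketch does not cover.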

