Dangerous Cloaking: Natural Trigger based Backdoor Attacks on Object Detectors in the Physical World. (arXiv:2201.08619v1 [cs.CV])
Jan. 24, 2022, 2:10 a.m. | Hua Ma, Yinshan Li, Yansong Gao, Alsharif Abuadbba, Zhi Zhang, Anmin Fu, Hyoungshick Kim, Said F. Al-Sarawi, Nepal Surya, Derek Abbott
cs.CV updates on arXiv.org
Deep learning models have been shown to be vulnerable to backdoor attacks. A backdoored model behaves normally on inputs that do not contain the attacker's secretly chosen trigger, and maliciously on inputs that do. To date, backdoor attacks and countermeasures have focused mainly on image classification tasks, and most are implemented in the digital world with digital triggers. Beyond classification, object detection systems are also regarded as a fundamental building block of computer vision. However, there is no …
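The trigger/no-trigger behaviour described in the abstract can be illustrated with a toy sketch. This is not the paper's natural-trigger attack: the patch trigger, the `stamp_trigger` and `backdoored_model` helpers, and the class labels are all hypothetical, chosen only to show how a backdoored model's output flips when the trigger appears.

```python
import numpy as np

# Illustrative digital-trigger backdoor (hypothetical, not the paper's method).
# The trigger is a small white patch in the image corner; a backdoored model
# behaves normally on clean inputs and emits the attacker's target label
# whenever the trigger is present.

TRIGGER_SIZE = 4
TARGET_LABEL = "person"  # attacker-chosen target class (hypothetical)

def stamp_trigger(image: np.ndarray) -> np.ndarray:
    """Return a copy of `image` with a white patch stamped in the top-left corner."""
    poisoned = image.copy()
    poisoned[:TRIGGER_SIZE, :TRIGGER_SIZE] = 255
    return poisoned

def has_trigger(image: np.ndarray) -> bool:
    """Check whether the trigger patch is present."""
    return bool((image[:TRIGGER_SIZE, :TRIGGER_SIZE] == 255).all())

def backdoored_model(image: np.ndarray) -> str:
    """Toy stand-in for a backdoored classifier."""
    if has_trigger(image):
        return TARGET_LABEL  # malicious behaviour on triggered input
    return "car"             # placeholder for the honest prediction

clean = np.zeros((32, 32), dtype=np.uint8)
print(backdoored_model(clean))                 # prints "car"
print(backdoored_model(stamp_trigger(clean)))  # prints "person"
```

A real attack would implant this behaviour by poisoning a fraction of the training data (triggered images relabeled to the target class) rather than by an explicit `if`, but the input/output contract is the same.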