April 17, 2023, 8:02 p.m. | M. Caner Tol, Saad Islam, Andrew J. Adiletta, Berk Sunar, Ziming Zhang

cs.LG updates on arXiv.org

State-of-the-art deep neural networks (DNNs) have been shown to be
vulnerable to adversarial manipulation and backdoor attacks. Backdoored models
deviate from their expected behavior on inputs containing predefined triggers
while retaining normal performance on clean data. Recent works focus on
software simulation of backdoor injection during the inference phase by
modifying network weights, an approach we find is often unrealistic in
practice due to hardware restrictions.
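The weight-modification threat model can be made concrete with a small sketch. The snippet below is illustrative only, not the paper's method: the flip_bit helper and the toy neuron are our own constructions, sketched under the assumption that weights are stored as IEEE-754 float32 values in memory, where a single-bit hardware fault can corrupt them.

```python
import numpy as np

def flip_bit(value: float, bit: int) -> np.float32:
    """Flip one bit in the IEEE-754 encoding of a float32 weight,
    mimicking the single-bit corruption a hardware fault could
    induce in a weight stored in DRAM."""
    as_bits = np.float32(value).view(np.uint32)
    return (as_bits ^ np.uint32(1 << bit)).view(np.float32)

# Toy neuron (hypothetical example): flipping the most significant
# exponent bit (bit 30) of one weight blows it up from 0.5 to ~1.7e38,
# so any input with a nonzero value at that position dominates the
# output, while inputs that leave the corrupted weight inactive are
# unaffected -- the clean-data/trigger asymmetry a backdoor relies on.
weights = np.array([0.5, -0.25, 0.125], dtype=np.float32)
x_clean = np.array([0.0, 1.0, 1.0], dtype=np.float32)  # trigger absent
x_trig  = np.array([1.0, 1.0, 1.0], dtype=np.float32)  # trigger present

weights[0] = flip_bit(weights[0], 30)
print(weights @ x_clean)  # unchanged: -0.125
print(weights @ x_trig)   # saturated: ~1.7e38
```

High exponent bits are the natural target in such a sketch: a single-bit fault there changes a weight's magnitude by dozens of orders of magnitude, which is why inference-time weight corruption is attractive to an attacker with hardware-level access.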

In contrast, in this work we present, for the first time, an end-to-end
backdoor injection attack realized …
