March 12, 2024, 4:50 a.m. | Xiaolei Liu, Ming Yi, Kangyi Ding, Bangzhou Xin, Yixiao Xu, Li Yan, Chao Shen

cs.CV updates on arXiv.org

arXiv:2304.10985v2 Announce Type: replace-cross
Abstract: Learning-based systems have been shown to be vulnerable to backdoor attacks, in which malicious users manipulate model performance by injecting backdoors into the target model and activating them with specific triggers. Previous backdoor attack methods focused primarily on two metrics: attack success rate and stealthiness. However, these methods often require significant privileges over the target model, such as control over the training process, which makes them difficult to mount in real-world scenarios. Moreover, the robustness of …
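
The truncated abstract does not describe the paper's own attack, but the backdoor-poisoning setup it refers to can be sketched briefly. The following is a minimal, generic illustration (BadNets-style data poisoning), not the authors' method; the trigger patch, poison rate, and target label below are assumptions chosen for the example.

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_rate=0.05,
                   trigger_value=1.0, patch=3, seed=0):
    """Stamp a small trigger patch onto a random subset of training images
    and relabel them to the attacker's target class.
    `images`: (N, H, W, C) float array in [0, 1]; `labels`: (N,) int array."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Place the trigger in the bottom-right corner of each chosen image.
    images[idx, -patch:, -patch:, :] = trigger_value
    labels[idx] = target_label
    return images, labels, idx

def apply_trigger(image, trigger_value=1.0, patch=3):
    """At test time, stamping the same trigger should steer a backdoored
    model's prediction toward the target class."""
    image = image.copy()
    image[-patch:, -patch:, :] = trigger_value
    return image

if __name__ == "__main__":
    # Toy stand-in for a real training set (CIFAR-10-sized images).
    x = np.random.rand(1000, 32, 32, 3).astype(np.float32)
    y = np.random.randint(0, 10, size=1000)
    x_p, y_p, poisoned_idx = poison_dataset(x, y, target_label=0)
    print(f"poisoned {len(poisoned_idx)} of {len(x)} samples")
```

In this framing, the two metrics the abstract mentions map directly onto the sketch: attack success rate is the fraction of trigger-stamped test inputs classified as the target label, and stealthiness is how little the poisoning degrades accuracy on clean inputs.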
