April 17, 2023, 8:13 p.m. | Jingyuan Wang, Yufan Wu, Mingxuan Li, Xin Lin, Junjie Wu, Chao Li

cs.CV updates on arXiv.org arxiv.org

Despite their great success in real-life applications, deep neural network
(DNN) models have long been criticized for their vulnerability to adversarial
attacks. Tremendous research effort has been dedicated to mitigating the
threat of adversarial attacks, but the essential trait of adversarial examples
is still not well understood, and most existing methods remain vulnerable to
hybrid attacks and suffer from counterattacks. In light of this, in this
paper, we first reveal a gradient-based correlation between sensitivity
analysis-based DNN interpreters …
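The gradient-based correlation the abstract alludes to can be illustrated in miniature: sensitivity-analysis interpreters (saliency maps) and gradient-based attacks such as FGSM both start from the same input gradient dL/dx. The sketch below is not the paper's method; the linear logistic model, weights, and data are illustrative assumptions chosen to keep the gradient analytic.

```python
import numpy as np

def loss_grad(x, w, y):
    """Gradient of the logistic loss w.r.t. the input x for a linear model."""
    p = 1.0 / (1.0 + np.exp(-np.dot(w, x)))  # sigmoid(w . x)
    return (p - y) * w                        # dL/dx

# Hypothetical weights and input, for illustration only.
w = np.array([0.5, -1.2, 2.0])
x = np.array([1.0, 0.3, -0.7])

g = loss_grad(x, w, y=1.0)

saliency = np.abs(g)          # sensitivity-analysis "explanation" of x
x_adv = x + 0.1 * np.sign(g)  # FGSM-style adversarial perturbation of x
```

Both artifacts are deterministic functions of the same gradient `g`, which is why an interpreter of this kind leaks information usable by (and about) gradient-based attacks.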

