Adversarial Attack via Dual-Stage Network Erosion. (arXiv:2201.00097v1 [cs.CV])
Jan. 4, 2022, 9:10 p.m. | Yexin Duan, Junhua Zou, Xingyu Zhou, Wu Zhang, Jin Zhang, Zhisong Pan
cs.CV updates on arXiv.org arxiv.org
Deep neural networks are vulnerable to adversarial examples, which can fool
deep models by adding subtle perturbations. Although existing attacks have
achieved promising results, generating transferable adversarial examples in
the black-box setting remains a long way off. To this end, this paper proposes
to improve the transferability of adversarial examples by applying dual-stage
feature-level perturbations to an existing model to implicitly create a set of
diverse models. These models are then fused by the longitudinal ensemble …
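The abstract's starting point, fooling a model with a subtle perturbation, can be sketched with the classic one-step gradient-sign attack (FGSM) on a toy logistic model. This is a generic illustration of the attack idea the paper builds on, not the paper's dual-stage method; the model, weights, and epsilon below are made-up examples.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, y, eps):
    """One-step gradient-sign perturbation on a logistic model
    p = sigmoid(w . x). Illustrative only -- not the paper's
    dual-stage network-erosion attack."""
    p = sigmoid(w @ x)
    grad_x = (p - y) * w            # d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad_x)  # step that increases the loss

# Toy input correctly classified as class 1 with high confidence.
w = np.array([1.0, -2.0, 0.5])
x = np.array([2.0, -1.0, 1.0])
x_adv = fgsm_perturb(x, w, y=1.0, eps=0.5)

p_clean = sigmoid(w @ x)
p_adv = sigmoid(w @ x_adv)
# The small perturbation lowers the model's confidence in the true class.
```

Transferability, the paper's focus, asks whether such a perturbation crafted against one model also fools other, unseen models; the abstract's dual-stage perturbations implicitly diversify the source model to make that more likely.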