Towards Transferable Adversarial Attacks on Vision Transformers. (arXiv:2109.04176v3 [cs.CV] UPDATED)
Jan. 4, 2022, 9:10 p.m. | Zhipeng Wei, Jingjing Chen, Micah Goldblum, Zuxuan Wu, Tom Goldstein, Yu-Gang Jiang
cs.CV updates on arXiv.org
Vision transformers (ViTs) have demonstrated impressive performance on a
series of computer vision tasks, yet they still suffer from adversarial
examples. In this paper, we posit that
adversarial attacks on transformers should be specially tailored for their
architecture, jointly considering both patches and self-attention, in order to
achieve high transferability. More specifically, we introduce a dual attack
framework, which contains a Pay No Attention (PNA) attack and a PatchOut
attack, to improve …
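The abstract only names the two components of the dual attack framework. As a rough illustration of the PatchOut idea alone (perturbing a randomly sampled subset of image patches at each attack iteration), here is a minimal NumPy sketch; the function name, parameters, and patch-masking details below are my own assumptions for illustration, not the paper's implementation, and the PNA component (skipping attention gradients during backpropagation) is omitted since it requires hooking into a real transformer's backward pass.

```python
import numpy as np

def patchout_step(grad, patch=4, keep=0.5, eps=0.03, rng=None):
    """One hypothetical PatchOut-style update: a signed gradient step
    restricted to a random subset of patches.

    grad  : (H, W) gradient of the loss w.r.t. the input image
    patch : side length of a square patch
    keep  : fraction of patches to perturb this iteration
    eps   : step size
    """
    rng = np.random.default_rng(rng)
    H, W = grad.shape
    gh, gw = H // patch, W // patch          # patch-grid dimensions
    n = gh * gw
    k = max(1, int(keep * n))                # number of patches to keep
    chosen = rng.choice(n, size=k, replace=False)
    mask = np.zeros((gh, gw))
    mask.flat[chosen] = 1.0
    mask = np.kron(mask, np.ones((patch, patch)))  # upsample mask to pixels
    return eps * np.sign(grad) * mask        # perturb only chosen patches

# Example: an 8x8 gradient split into four 4x4 patches, half perturbed.
delta = patchout_step(np.ones((8, 8)), patch=4, keep=0.5, eps=0.03, rng=0)
```

Restricting each step to a patch subset acts as a form of regularization on the perturbation, which is the intuition the abstract gives for improved transferability across architectures.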