Semantically Stealthy Adversarial Attacks against Segmentation Models. (arXiv:2104.01732v3 [cs.CV] UPDATED)
Jan. 10, 2022, 2:10 a.m. | Zhenhua Chen, Chuhua Wang, David J. Crandall
cs.CV updates on arXiv.org
Segmentation models have been found to be vulnerable to both targeted and
non-targeted adversarial attacks. However, the resulting segmentation outputs
are often so damaged that an attack is easy to spot. In this paper, we
propose semantically stealthy adversarial attacks that manipulate targeted
labels while preserving non-targeted labels. One challenge is making
semantically meaningful manipulations across datasets and models; another is
avoiding damage to non-targeted labels. To address these challenges, we
consider each input image …
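The abstract's core objective — push pixels of a targeted class toward an attacker-chosen label while anchoring all other pixels to their original predictions — can be sketched generically. The code below is not the paper's method; it is a minimal PGD-style illustration on a hypothetical per-pixel linear "segmenter" (`logits = x @ W`), where the loss combines cross-entropy toward the attack label on targeted pixels with cross-entropy toward the original predictions everywhere else.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def stealthy_pgd_attack(W, x, target_mask, attack_label,
                        steps=100, alpha=0.1, eps=3.0):
    """Toy semantically stealthy attack (illustrative, not the paper's method).

    W           : (d, c) weights of a per-pixel linear classifier
    x           : (n_pixels, d) input features
    target_mask : (n_pixels,) bool, pixels to flip to attack_label
    Non-targeted pixels are anchored to their original predictions, so the
    segmentation map stays intact outside the targeted region.
    """
    orig_pred = (x @ W).argmax(-1)                 # labels to preserve
    desired = np.where(target_mask, attack_label, orig_pred)
    onehot = np.eye(W.shape[1])[desired]
    x_adv = x.copy()
    for _ in range(steps):
        p = softmax(x_adv @ W)                     # (n_pixels, c)
        grad = (p - onehot) @ W.T                  # dCE/dx, per pixel
        x_adv = x_adv - alpha * np.sign(grad)      # descend toward desired labels
        x_adv = np.clip(x_adv, x - eps, x + eps)   # L_inf perturbation budget
    return x_adv

# Tiny demo: 2 features, 2 classes, identity weights; flip only pixel 0.
W = np.eye(2)
x = np.array([[2.0, 0.0], [2.0, 0.0], [0.0, 2.0]])
mask = np.array([True, False, False])
x_adv = stealthy_pgd_attack(W, x, mask, attack_label=1)
print((x_adv @ W).argmax(-1))  # targeted pixel flips; others keep their labels
```

The key design choice is that the same cross-entropy descent serves both goals: on targeted pixels it drives the prediction toward the attack label, while on the rest it reinforces the original label, which is what keeps the attack visually and semantically stealthy.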