Adversarial amplitude swap towards robust image classifiers. (arXiv:2203.07138v3 [cs.CV] UPDATED)
April 4, 2022, 1:10 a.m. | Chun Yang Tan, Kazuhiko Kawamoto, Hiroshi Kera
cs.CV updates on arXiv.org arxiv.org
The vulnerability of convolutional neural networks (CNNs) to image
perturbations such as common corruptions and adversarial perturbations has
recently been investigated from the perspective of frequency. In this study, we
investigate the effect of the amplitude and phase spectra of adversarial images
on the robustness of CNN classifiers. Extensive experiments revealed that the
images generated by combining the amplitude spectrum of adversarial images with
the phase spectrum of clean images accommodate moderate and general
perturbations, and training with these images …
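The amplitude/phase combination the abstract describes can be sketched with a 2D Fourier transform: take the magnitude spectrum from the adversarial image and the phase spectrum from the clean image, then invert. A minimal NumPy sketch (the function name and the per-channel handling are illustrative assumptions, not the authors' code):

```python
import numpy as np

def amplitude_phase_swap(adv_img, clean_img):
    """Combine the amplitude spectrum of adv_img with the phase
    spectrum of clean_img, as described in the abstract.
    Works on (H, W) or (H, W, C) float arrays."""
    # 2D FFT over the spatial axes (per channel if present)
    adv_f = np.fft.fft2(adv_img, axes=(0, 1))
    clean_f = np.fft.fft2(clean_img, axes=(0, 1))
    # amplitude from the adversarial image, phase from the clean image
    combined = np.abs(adv_f) * np.exp(1j * np.angle(clean_f))
    # inverse FFT; imaginary part is numerical noise, so drop it
    return np.real(np.fft.ifft2(combined, axes=(0, 1)))
```

When both inputs are the same image, the swap is the identity, which is a quick sanity check that the spectra are being recombined consistently.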