On Inherent Adversarial Robustness of Active Vision Systems
April 2, 2024, 7:46 p.m. | Amitangshu Mukherjee, Timur Ibrayev, Kaushik Roy
cs.CV updates on arXiv.org (arxiv.org)
Abstract: Current Deep Neural Networks are vulnerable to adversarial examples, which alter their predictions by adding carefully crafted noise. Since human vision is robust to such inputs, the vulnerability may stem from the standard one-shot way of processing inputs, in which every pixel is treated with equal importance. In contrast, neuroscience suggests that the human vision system can differentiate salient features by (1) switching between multiple fixation points (saccades) and (2) processing …
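The saccade mechanism the abstract alludes to can be illustrated with a minimal sketch: pick a few fixation points from a saliency proxy (here, simple gradient magnitude, not the paper's actual method), suppress the region around each chosen point (a crude inhibition of return), and extract a small glimpse at each fixation instead of processing every pixel uniformly. All function names and parameters below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def saliency_map(image):
    # Crude saliency proxy: local gradient magnitude (stand-in for a
    # learned saliency model; chosen only for illustration).
    gy, gx = np.gradient(image.astype(float))
    return np.hypot(gx, gy)

def select_fixations(sal, num_fixations=3, suppress=8):
    # Greedily pick saliency peaks, masking a window around each pick
    # so successive fixations land on different regions.
    sal = sal.copy()
    h, w = sal.shape
    points = []
    for _ in range(num_fixations):
        y, x = np.unravel_index(np.argmax(sal), sal.shape)
        points.append((y, x))
        sal[max(0, y - suppress):min(h, y + suppress),
            max(0, x - suppress):min(w, x + suppress)] = -np.inf
    return points

def extract_glimpse(image, center, size=8):
    # Crop a size x size patch around the fixation, clipped to the image.
    y, x = center
    h, w = image.shape
    y0 = int(np.clip(y - size // 2, 0, h - size))
    x0 = int(np.clip(x - size // 2, 0, w - size))
    return image[y0:y0 + size, x0:x0 + size]

# Toy image with one bright square as the salient region.
img = np.zeros((32, 32))
img[10:18, 20:28] = 1.0
fixations = select_fixations(saliency_map(img), num_fixations=2)
glimpses = [extract_glimpse(img, p) for p in fixations]
```

In a full active-vision model, each glimpse would then be encoded by a shared network and the per-fixation features aggregated before classification; the sketch only covers the fixation-selection step.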