April 2, 2024, 7:46 p.m. | Amitangshu Mukherjee, Timur Ibrayev, Kaushik Roy

cs.CV updates on arXiv.org

arXiv:2404.00185v1 Announce Type: new
Abstract: Current Deep Neural Networks are vulnerable to adversarial examples, which alter their predictions by adding carefully crafted noise. Since human eyes are robust to such inputs, it is possible that the vulnerability stems from the standard way of processing inputs in one shot by processing every pixel with the same importance. In contrast, neuroscience suggests that the human vision system can differentiate salient features by (1) switching between multiple fixation points (saccades) and (2) processing …

