April 10, 2024, 4:43 a.m. | Miguel Lerma, Mirtha Lucas

cs.LG updates on arXiv.org

arXiv:2307.03305v3 Announce Type: replace
Abstract: We discuss a vulnerability involving a category of attribution methods used to explain the outputs of convolutional neural networks working as classifiers. It is known that networks of this type are vulnerable to adversarial attacks, in which imperceptible perturbations of the input may alter the outputs of the model. In contrast, here we focus on the effects that small modifications of the model may have on the attribution method without altering the model outputs.
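The announcement does not spell out the mechanism, but one way such an effect can arise, sketched here as an illustration rather than the authors' construction, is softmax shift-invariance: adding the same input-dependent scalar to every logit leaves the class probabilities unchanged while changing the gradients of the pre-softmax scores that many attribution methods rely on. The PyTorch sketch below uses a hypothetical TinyClassifier and an assumed shift g(x) = alpha * sum(x^2) to show outputs staying fixed while a simple gradient-based attribution changes.

```python
# Minimal sketch (not the paper's code): a model modification that leaves
# softmax outputs untouched but changes pre-softmax gradients.
# Assumption: the modification adds the same input-dependent scalar g(x)
# to every logit; softmax is invariant to such shifts, but gradients of
# individual logits pick up the extra term d g / d x.

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

class TinyClassifier(nn.Module):
    """Hypothetical toy CNN classifier on 8x8 single-channel inputs."""
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.conv = nn.Conv2d(1, 4, kernel_size=3, padding=1)
        self.fc = nn.Linear(4 * 8 * 8, n_classes)

    def forward(self, x):
        h = F.relu(self.conv(x))
        return self.fc(h.flatten(1))  # logits (pre-softmax scores)

class ShiftedClassifier(nn.Module):
    """Same weights, but adds g(x) = alpha * sum(x^2) to every logit."""
    def __init__(self, base: TinyClassifier, alpha: float = 5.0):
        super().__init__()
        self.base = base
        self.alpha = alpha

    def forward(self, x):
        logits = self.base(x)
        g = self.alpha * x.pow(2).sum(dim=(1, 2, 3))  # one scalar per sample
        return logits + g.unsqueeze(1)                # same shift for all classes

def logit_saliency(model, x):
    """Gradient of the top pre-softmax score w.r.t. the input (saliency-style)."""
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    logits[0, logits.argmax(dim=1)].sum().backward()
    return x.grad.detach()

base = TinyClassifier()
shifted = ShiftedClassifier(base)
x = torch.randn(1, 1, 8, 8)

# 1) Class probabilities are identical: softmax(z + c) == softmax(z).
print(torch.allclose(F.softmax(base(x), dim=1),
                     F.softmax(shifted(x), dim=1), atol=1e-6))  # True

# 2) The pre-softmax gradient attribution differs between the two models.
print(torch.allclose(logit_saliency(base, x),
                     logit_saliency(shifted, x)))               # False
```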

