March 26, 2024, 4:42 a.m. | Georgii Mikriukov, Gesina Schwalbe, Franz Motzkus, Korinna Bade

cs.LG updates on arXiv.org

arXiv:2403.16782v1 Announce Type: new
Abstract: Adversarial attacks (AAs) pose a significant threat to the reliability and robustness of deep neural networks. While the impact of these attacks on model predictions has been extensively studied, their effect on the learned representations and concepts within these models remains largely unexplored. In this work, we perform an in-depth analysis of the influence of AAs on the concepts learned by convolutional neural networks (CNNs) using eXplainable artificial intelligence (XAI) techniques. Through an extensive set …
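The truncated abstract does not spell out the authors' analysis pipeline, but a minimal sketch of the general setting may help: craft a standard FGSM adversarial example against a pretrained CNN and compare an intermediate layer's activations before and after the attack, a coarse stand-in for the concept-level comparison the paper performs with XAI techniques. The model (ResNet-18), hooked layer, and epsilon below are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch (not the paper's actual method): FGSM attack on a
# pretrained CNN, then compare an intermediate layer's activations
# on the clean vs. adversarial input.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm(x, label, eps=0.03):
    # Fast Gradient Sign Method: one signed-gradient step on the input.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

activations = {}
def hook(module, inputs, output):
    activations["feat"] = output.detach()

# Hook a mid-level block; which layers encode which "concepts" is
# exactly what concept-based XAI methods probe.
model.layer3.register_forward_hook(hook)

x = torch.rand(1, 3, 224, 224)        # stand-in for a real image
label = model(x).argmax(dim=1)        # use the model's own prediction
clean_feat = activations["feat"]

x_adv = fgsm(x, label)
model(x_adv)
adv_feat = activations["feat"]

# Cosine similarity of flattened activations: a coarse proxy for how
# strongly the attack perturbs the learned representation.
sim = F.cosine_similarity(clean_feat.flatten(1), adv_feat.flatten(1))
print(f"activation similarity (clean vs. adversarial): {sim.item():.3f}")
```

A concept-based XAI analysis would go a step further than this single similarity score, mapping such activation shifts onto human-interpretable concepts to see which of them the attack disrupts.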
