March 11, 2024, 4:42 a.m. | Ping Guo, Cheng Gong, Xi Lin, Zhiyuan Yang, Qingfu Zhang

cs.LG updates on arXiv.org

arXiv:2403.05100v1 Announce Type: cross
Abstract: The escalating threat of adversarial attacks on deep learning models, particularly in security-critical fields, has underscored the need for robust deep learning systems. Conventional robustness evaluations have relied on adversarial accuracy, which measures a model's performance under a specific perturbation intensity. However, this singular metric does not fully encapsulate the overall resilience of a model against varying degrees of perturbation. To address this gap, we propose a new metric termed adversarial hypervolume, assessing the robustness …
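To make the contrast with adversarial accuracy concrete, here is a minimal sketch of the underlying idea: instead of reporting accuracy at one fixed perturbation budget, aggregate it over a range of intensities. This is only an illustration; the function names, the `accuracy_at` callable, and the area-under-the-curve approximation are assumptions, and the paper's exact adversarial-hypervolume definition may differ.

```python
# Illustrative sketch (not the paper's reference implementation):
# approximate robustness across perturbation levels as the normalized
# area under the accuracy-vs-perturbation curve.
import numpy as np

def adversarial_hypervolume(accuracy_at, eps_grid):
    """Aggregate adversarial accuracy over a range of perturbation budgets.

    accuracy_at: callable mapping a perturbation budget eps -> accuracy in [0, 1]
                 (e.g., PGD-evaluated accuracy of a fixed model); hypothetical helper.
    eps_grid:    increasing array of perturbation intensities to evaluate.
    """
    eps_grid = np.asarray(eps_grid, dtype=float)
    acc = np.array([accuracy_at(e) for e in eps_grid])
    # Trapezoidal area under the accuracy curve, normalized by the eps range
    # so the score stays in [0, 1].
    area = np.trapz(acc, eps_grid)
    return area / (eps_grid[-1] - eps_grid[0])

if __name__ == "__main__":
    # Toy accuracy curve: clean accuracy 0.95 decaying as the perturbation grows.
    toy_accuracy = lambda eps: 0.95 * np.exp(-12.0 * eps)
    eps_levels = np.linspace(0.0, 8.0 / 255.0, 9)  # e.g., L-inf budgets up to 8/255
    score = adversarial_hypervolume(toy_accuracy, eps_levels)
    print(f"adversarial hypervolume (sketch): {score:.3f}")
```

Two models with the same accuracy at a single budget (say 8/255) can score very differently under this aggregate if one degrades gracefully at intermediate budgets and the other collapses, which is the gap the proposed metric aims to capture.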

