Jan. 14, 2022, 2:10 a.m. | Haya Brama, Lihi Dery, Tal Grinshpoun

cs.LG updates on arXiv.org

The problem of attacks on neural networks through input modification (i.e., adversarial examples) has attracted much attention recently. Being relatively easy to generate and hard to detect, these attacks pose a security threat that many proposed defenses try to mitigate. However, the evaluation of the effect of attacks and defenses commonly relies on traditional classification metrics, without adequate adaptation to adversarial scenarios. Most of these metrics are accuracy-based and therefore may have limited scope and low distinctive power. Other …

