Feb. 13, 2024, 5:45 a.m. | Matilde Tristany Farinha, Thomas Ortner, Giorgia Dellaferrera, Benjamin Grewe, Angeliki Pantazi

cs.LG updates on arXiv.org

Artificial Neural Networks (ANNs) trained with Backpropagation (BP) excel at a wide range of everyday tasks but have a dangerous vulnerability: inputs with small targeted perturbations, known as adversarial samples, can drastically disrupt their performance. Adversarial training, a technique in which the training dataset is augmented with exemplary adversarial samples, has been shown to mitigate this problem, but at a high computational cost. In contrast to ANNs, humans do not misclassify these same adversarial samples, so one can postulate that …
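To make the two concepts from the abstract concrete, here is a minimal, self-contained sketch (an assumed illustration, not the paper's method): adversarial samples generated with the Fast Gradient Sign Method (FGSM), and an adversarial training loop that augments each update with such a sample, shown on a tiny logistic-regression model.

```python
import math

# Minimal sketch (illustration only): FGSM adversarial samples and
# adversarial training for a tiny logistic-regression model.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def loss(w, x, y):
    # Logistic loss -log(sigmoid(y * <w, x>)) for a label y in {-1, +1}.
    return -math.log(sigmoid(y * dot(w, x)))

def fgsm(w, x, y, eps):
    # Fast Gradient Sign Method: a small targeted perturbation of the input
    # in the direction that increases the loss, bounded by eps per coordinate.
    s = sigmoid(-y * dot(w, x))
    grad_x = [-y * s * wi for wi in w]  # d(loss)/d(x) for the linear model
    step = lambda g: eps if g > 0 else (-eps if g < 0 else 0.0)
    return [xi + step(gi) for xi, gi in zip(x, grad_x)]

def train(data, adversarial=False, eps=0.3, lr=0.5, epochs=100):
    # Plain SGD; with adversarial=True, each sample is replaced by its
    # FGSM perturbation before the update (dataset augmentation).
    w = [0.0] * len(data[0][0])
    for _ in range(epochs):
        for x, y in data:
            if adversarial:
                x = fgsm(w, x, y, eps)
            s = sigmoid(-y * dot(w, x))
            w = [wi + lr * y * s * xi for wi, xi in zip(w, x)]
    return w

# Toy separable data: the label is the sign of the first coordinate.
data = [([x0, x1], 1.0 if x0 > 0 else -1.0)
        for x0 in (-2.0, -1.0, 1.0, 2.0) for x1 in (-1.0, 0.0, 1.0)]

w = train(data, adversarial=True)
clean_acc = sum((dot(w, x) > 0) == (y > 0) for x, y in data) / len(data)
print("clean accuracy:", clean_acc)
x0, y0 = data[0]
print("loss clean vs adversarial:",
      loss(w, x0, y0), loss(w, fgsm(w, x0, y0, 0.3), y0))
```

The extra cost the abstract mentions is visible here: every training step pays for an additional gradient computation (through `fgsm`) before the weight update, which is what makes adversarial training expensive at scale.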

