March 20, 2024, 4:43 a.m. | Alexander Robey, Fabian Latorre, George J. Pappas, Hamed Hassani, Volkan Cevher

cs.LG updates on arXiv.org

arXiv:2306.11035v2 Announce Type: replace
Abstract: One prominent approach toward resolving the adversarial vulnerability of deep neural networks is the two-player zero-sum paradigm of adversarial training, in which predictors are trained against adversarially chosen perturbations of data. Despite the promise of this approach, algorithms based on this paradigm have not engendered sufficient levels of robustness and suffer from pathological behavior like robust overfitting. To understand this shortcoming, we first show that the commonly used surrogate-based relaxation used in adversarial training algorithms …
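The two-player zero-sum paradigm the abstract refers to can be made concrete with a small sketch: the inner player (the adversary) maximizes a surrogate loss by perturbing the inputs within a norm ball, and the outer player (the predictor) minimizes that same loss on the perturbed data. The following is a minimal, illustrative NumPy example using a one-step signed-gradient (FGSM-style) inner maximization on a toy logistic-regression model; all names and hyperparameters are assumptions for illustration, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grads(w, X, y):
    """Binary cross-entropy; returns loss, grad wrt w, grad wrt X."""
    p = sigmoid(X @ w)
    eps = 1e-12
    loss = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    r = (p - y) / len(y)          # per-example residual
    grad_w = X.T @ r
    grad_X = np.outer(r, w)
    return loss, grad_w, grad_X

def fgsm(w, X, y, radius):
    """Inner maximization: one signed-gradient step in an l_inf ball."""
    _, _, grad_X = loss_and_grads(w, X, y)
    return X + radius * np.sign(grad_X)

# Toy linearly separable data.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)
radius, lr = 0.1, 1.0
for _ in range(300):
    X_adv = fgsm(w, X, y, radius)            # adversary's move (max)
    _, grad_w, _ = loss_and_grads(w, X_adv, y)
    w -= lr * grad_w                         # predictor's move (min)

clean_acc = np.mean((sigmoid(X @ w) > 0.5) == y)
robust_acc = np.mean((sigmoid(fgsm(w, X, y, radius) @ w) > 0.5) == y)
```

Note that both players here optimize the same surrogate (cross-entropy) loss, which is exactly the zero-sum structure whose relaxation the paper scrutinizes; practical algorithms such as PGD-based adversarial training replace the single FGSM step with several projected gradient steps.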

