Adversarial Training Should Be Cast as a Non-Zero-Sum Game
March 20, 2024, 4:43 a.m. | Alexander Robey, Fabian Latorre, George J. Pappas, Hamed Hassani, Volkan Cevher
cs.LG updates on arXiv.org arxiv.org
Abstract: One prominent approach toward resolving the adversarial vulnerability of deep neural networks is the two-player zero-sum paradigm of adversarial training, in which predictors are trained against adversarially chosen perturbations of data. Despite the promise of this approach, algorithms based on this paradigm have not engendered sufficient levels of robustness and suffer from pathological behavior like robust overfitting. To understand this shortcoming, we first show that the surrogate-based relaxation commonly used in adversarial training algorithms …
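As a rough illustration of the zero-sum paradigm the abstract describes (a generic sketch, not the paper's proposed algorithm), adversarial training solves min over weights of the max over bounded perturbations of the loss. For logistic regression under an L-infinity budget, the inner maximization has a closed form, which keeps the sketch self-contained:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_train(X, y, epsilon=0.1, lr=0.5, steps=200):
    """Two-player zero-sum adversarial training for logistic regression.

    Inner player: for a linear model, the worst-case L_inf perturbation
    of size epsilon has the closed form delta = -epsilon * y * sign(w),
    which shrinks the margin y * w.(x + delta) as much as possible.
    Outer player: gradient descent on the logistic loss at the
    perturbed inputs. Labels y are in {-1, +1}.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        # Inner maximization (closed form for a linear predictor).
        delta = -epsilon * y[:, None] * np.sign(w)[None, :]
        X_adv = X + delta
        # Outer minimization: gradient of mean log(1 + exp(-y * w.x_adv)).
        p = sigmoid((X_adv @ w) * y)          # P(correct margin)
        grad = -((1.0 - p) * y) @ X_adv / n
        w -= lr * grad
    return w
```

For nonlinear networks the inner maximum has no closed form and is approximated instead (e.g. by projected gradient ascent); the surrogate losses used in that approximation are exactly what the abstract's analysis targets.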
Tags: adversarial training, algorithms, cs.LG, math.OC, stat.ML, neural networks, vulnerability, zero-sum game
More from arxiv.org / cs.LG updates on arXiv.org
Testable Learning with Distribution Shift
1 day, 4 hours ago | arxiv.org
Quantum circuit synthesis with diffusion models
1 day, 4 hours ago | arxiv.org
Fitness Approximation through Machine Learning
1 day, 4 hours ago | arxiv.org