Nash Equilibria and Pitfalls of Adversarial Training in Adversarial Robustness Games. (arXiv:2210.12606v2 [cs.LG] UPDATED)
Oct. 27, 2022, 1:12 a.m. | Maria-Florina Balcan, Rattana Pukdee, Pradeep Ravikumar, Hongyang Zhang
cs.LG updates on arXiv.org
Adversarial training is a standard technique for training adversarially robust models. In this paper, we study adversarial training as an alternating best-response strategy in a 2-player zero-sum game. We prove that even in a simple scenario of a linear classifier and a statistical model that abstracts robust vs. non-robust features, the alternating best-response strategy of such a game may not converge. On the other hand, a unique pure Nash equilibrium of the game exists and is provably robust. We support …
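The alternating best-response view described in the abstract can be sketched on a toy problem. The code below is a hypothetical illustration, not the paper's model: the defender is a linear classifier trained with hinge loss, and the attacker best-responds with an L-infinity perturbation of size `eps` (for a linear model, the worst-case perturbation is simply `-eps * y * sign(w)`). All names and the synthetic data are assumptions for illustration.

```python
import numpy as np

# Toy sketch (illustrative only): adversarial training as alternating
# best response in a 2-player zero-sum game with a linear classifier.
rng = np.random.default_rng(0)
n, d, eps, lr = 200, 2, 0.3, 0.1

# Synthetic labels and 2-D features: one strong (robust) coordinate,
# one weak coordinate, plus Gaussian noise.
y = rng.choice([-1.0, 1.0], size=n)
X = y[:, None] * np.array([2.0, 0.5]) + rng.normal(scale=0.4, size=(n, d))

w = rng.normal(size=d)
for _ in range(20):
    # Attacker best response: for a linear model, shifting each point by
    # -eps * y * sign(w) maximally shrinks the margin y * (x @ w).
    X_adv = X - eps * y[:, None] * np.sign(w)[None, :]
    # Defender response, approximated by gradient steps on the hinge loss
    # over the perturbed data (an exact best response would fully re-fit).
    for _ in range(10):
        margins = y * (X_adv @ w)
        grad = -(y[:, None] * X_adv * (margins < 1)[:, None]).mean(axis=0)
        w -= lr * grad

# Robust accuracy of the final classifier against the best-response attack.
robust_acc = np.mean(y * ((X - eps * y[:, None] * np.sign(w)) @ w) > 0)
```

The paper's result suggests that in general such alternation need not converge, even though a unique pure Nash equilibrium exists; this sketch only shows the mechanics of the alternating updates.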