Generalizing Adversarial Robustness with Confidence-Calibrated Adversarial Training in PyTorch
David Stutz • Blog Archives • davidstutz.de
Taking the adversarial training setup from the previous article as a baseline, this article introduces confidence-calibrated adversarial training, a variant that addresses two significant flaws of standard adversarial training: first, a model trained on L∞ adversarial examples is not robust against L2 ones; second, adversarial training incurs a significant increase in (clean) test error. Confidence-calibrated adversarial training tackles both problems by encouraging low confidence on adversarial examples during training and then rejecting low-confidence inputs at test time.
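The core idea can be sketched in a few lines of PyTorch: during training, the target distribution for an adversarial example is interpolated between the one-hot label and the uniform distribution, depending on how large the perturbation is; at test time, predictions whose softmax confidence falls below a threshold are rejected. This is a minimal illustrative sketch, not the article's actual implementation; the function names and the interpolation exponent `rho` are assumptions chosen for clarity.

```python
import torch
import torch.nn.functional as F

def calibrated_target(y, delta, epsilon, num_classes, rho=1.0):
    """Soft target for an adversarial example: interpolate between the
    one-hot label (tiny perturbation) and the uniform distribution
    (perturbation near the L_inf budget epsilon).

    Note: `rho` and the exact interpolation schedule are illustrative
    assumptions, not necessarily the article's choices.
    """
    norm = delta.flatten(1).abs().max(dim=1).values      # per-example L_inf norm
    lam = (1.0 - torch.clamp(norm / epsilon, max=1.0)) ** rho
    one_hot = F.one_hot(y, num_classes).float()
    uniform = torch.full_like(one_hot, 1.0 / num_classes)
    return lam.unsqueeze(1) * one_hot + (1.0 - lam.unsqueeze(1)) * uniform

def ccat_loss(logits_clean, y, logits_adv, target_adv):
    """Cross-entropy on clean examples plus cross-entropy against the
    calibrated soft target on adversarial examples."""
    ce_clean = F.cross_entropy(logits_clean, y)
    ce_adv = -(target_adv * F.log_softmax(logits_adv, dim=1)).sum(dim=1).mean()
    return ce_clean + ce_adv

def predict_with_rejection(logits, threshold=0.9):
    """Return class predictions, with -1 for rejected (low-confidence) inputs."""
    probs = F.softmax(logits, dim=1)
    conf, pred = probs.max(dim=1)
    pred[conf < threshold] = -1
    return pred
```

With a zero perturbation the target reduces to the one-hot label, and at the full budget it becomes uniform, so the model is explicitly trained to be uncertain on strongly perturbed inputs; rejection then turns that uncertainty into robustness regardless of which Lp ball the attack came from.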