47.9% Robust Test Error on CIFAR10 with Adversarial Training and PyTorch
David Stutz (davidstutz.de)
Having seen how to compute adversarial examples in the previous article, it would be ideal to train models for which such adversarial examples do not exist. That is the goal of adversarially robust training procedures. In this article, I describe a particularly popular approach called adversarial training: the idea is to train directly on adversarial examples computed on-the-fly during training. I will also discuss a PyTorch implementation that obtains 47.9% robust test error, i.e., 52.1% robust accuracy, …
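The on-the-fly scheme described above can be sketched as follows: each training step first perturbs the current batch with a projected gradient descent (PGD) attack, then updates the model on the perturbed batch. This is a minimal illustration, not the article's exact implementation; the hyperparameters (an L-infinity ball of radius 8/255, step size 2/255, 7 attack iterations) are common defaults from the literature and are assumptions here, as are the function names.

```python
import torch
import torch.nn as nn

def pgd_attack(model, x, y, epsilon=8/255, alpha=2/255, steps=7):
    """Craft L-infinity adversarial examples with PGD.

    Assumes inputs lie in [0, 1]; epsilon/alpha/steps are assumed
    defaults, not necessarily the article's settings.
    """
    # Random start inside the epsilon-ball, clipped to the valid image range.
    x_adv = (x.detach() + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss along the gradient sign, then project back
        # onto the epsilon-ball around x and the [0, 1] box.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """One adversarial training step: attack the batch, train on the result."""
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Note the design choice: the attack runs against the current model weights each step, so the adversarial examples stay adapted to the model as it trains, which is what distinguishes adversarial training from training once on a fixed set of precomputed perturbations.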