Proper Robustness Evaluation of Confidence-Calibrated Adversarial Training in PyTorch
David Stutz • davidstutz.de
Properly evaluating defenses against adversarial examples has been difficult because attacks need to be adapted to each individual defense. This also holds for confidence-calibrated adversarial training, which obtains robustness by rejecting adversarial examples based on their confidence; as a result, standard robustness metrics and attacks are not directly applicable. In this article, I discuss how to evaluate confidence-calibrated adversarial training in terms of both metrics and attacks.
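To make the rejection idea concrete, below is a minimal sketch in PyTorch (my own illustration, not code from the post): predictions whose maximum softmax confidence falls below a threshold tau are rejected, and errors are counted only on the accepted examples. The model, the threshold `tau`, and the input/label tensors are assumed placeholders.

```python
# Minimal sketch of confidence-thresholded evaluation (assumed names, not the
# author's exact evaluation code): examples whose maximum softmax probability
# falls below tau are rejected; errors are counted only on accepted examples.
import torch
import torch.nn.functional as F

@torch.no_grad()
def confidence_thresholded_error(model, x, labels, tau=0.9):
    """Fraction of accepted (confidence >= tau) examples that are misclassified."""
    logits = model(x)                       # x may be clean or adversarial inputs
    probs = F.softmax(logits, dim=1)
    confidences, predictions = probs.max(dim=1)

    accepted = confidences >= tau           # examples the rejection rule keeps
    if accepted.sum() == 0:
        return 0.0                          # everything rejected: no errors counted
    errors = (predictions[accepted] != labels[accepted]).float()
    return errors.mean().item()
```

Applied to adversarial inputs, this kind of thresholded error only penalizes adversarial examples that both fool the classifier and pass the confidence check, which is why attacks against such a defense must also be adapted to produce high-confidence perturbations.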
Tags: adversarial attacks, adversarial machine learning, attacks, blog, computer vision, confidence, deep learning, defense, evaluation, examples, machine learning, metrics, python, pytorch, robustness, training