Proper Robustness Evaluation of Confidence-Calibrated Adversarial Training in PyTorch

July 20, 2023, 4:48 p.m. | David Stutz | davidstutz.de

Properly evaluating defenses against adversarial examples has proven difficult because adversarial attacks must be adapted to each individual defense. This also holds for confidence-calibrated adversarial training, which obtains robustness by rejecting adversarial examples based on their (low) confidence. As a result, standard robustness metrics and attacks are not directly applicable. In this article, I discuss how to evaluate confidence-calibrated adversarial training in terms of both metrics and attacks, as sketched in the example below.
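To make the metric side concrete, here is a minimal PyTorch sketch of confidence-thresholded evaluation: the threshold is first calibrated on clean test examples, and robust test error then counts only adversarial examples that are not rejected. The names (`model`, `clean_loader`, `adv_loader`) and the 99% true-positive rate are illustrative assumptions, not the post's actual code.

```python
import torch


@torch.no_grad()
def confidence_threshold(model, clean_loader, device, tpr=0.99):
    """Pick a confidence threshold such that roughly `tpr` of the
    correctly classified clean examples are NOT rejected."""
    confidences = []
    for inputs, targets in clean_loader:
        inputs, targets = inputs.to(device), targets.to(device)
        probs = torch.softmax(model(inputs), dim=1)
        conf, preds = probs.max(dim=1)
        # Only correctly classified clean examples define the threshold.
        confidences.append(conf[preds == targets])
    confidences = torch.cat(confidences)
    # Reject the lowest-confidence (1 - tpr) fraction of these examples.
    return torch.quantile(confidences, 1.0 - tpr).item()


@torch.no_grad()
def confidence_thresholded_robust_error(model, adv_loader, threshold, device):
    """Robust test error that counts an adversarial example as an error
    only if it is accepted (confidence above threshold) AND misclassified."""
    errors, total = 0, 0
    for inputs, targets in adv_loader:
        inputs, targets = inputs.to(device), targets.to(device)
        probs = torch.softmax(model(inputs), dim=1)
        conf, preds = probs.max(dim=1)
        accepted = conf > threshold
        errors += (accepted & (preds != targets)).sum().item()
        total += targets.numel()
    return errors / max(total, 1)
```

Calibrating the threshold on clean, correctly classified examples mirrors deployment: the rejection rate on benign inputs is fixed first, and robustness is then measured only on the adversarial examples that pass this threshold.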



Tags: adversarial attacks, adversarial machine learning, attacks, blog, computer vision, confidence, deep learning, defense, evaluation, examples, machine learning, metrics, python, pytorch, robustness, training
