Distal Adversarial Examples Against Neural Networks in PyTorch
David Stutz · davidstutz.de
Out-of-distribution examples are images that are clearly irrelevant to the task at hand. Unfortunately, deep neural networks frequently assign arbitrary labels to such examples with high confidence. In this article, I want to discuss an adversarial way of computing high-confidence out-of-distribution examples, so-called distal adversarial examples, and how confidence-calibrated adversarial training handles them.
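The idea can be sketched in a few lines of PyTorch: start from random noise and run gradient ascent on the network's maximum softmax confidence, producing an input that is clearly off-distribution yet classified with high confidence. This is a minimal illustration, not the article's exact procedure; the tiny linear `model`, the step count, and the step size are placeholder assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-in for a trained classifier (28x28 inputs, 10 classes).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

def distal_adversarial_example(model, shape=(1, 1, 28, 28),
                               steps=40, step_size=0.05):
    """Ascend the maximum softmax confidence starting from noise,
    yielding a high-confidence out-of-distribution input."""
    x = torch.rand(shape, requires_grad=True)
    for _ in range(steps):
        logits = model(x)
        # Maximize the highest class probability; any label will do.
        confidence = F.softmax(logits, dim=1).max(dim=1).values.sum()
        grad, = torch.autograd.grad(confidence, x)
        with torch.no_grad():
            x += step_size * grad.sign()  # signed-gradient ascent step
            x.clamp_(0.0, 1.0)            # stay in a valid image range
    return x.detach()

x_distal = distal_adversarial_example(model)
conf = F.softmax(model(x_distal), dim=1).max().item()
```

A standard network typically reports high confidence on such noise images, whereas confidence-calibrated adversarial training is designed to push the predicted distribution toward uniform away from the data manifold.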