Distal Adversarial Examples Against Neural Networks in PyTorch
Out-of-distribution examples are images that are clearly irrelevant to the task at hand. Unfortunately, deep neural networks frequently assign arbitrary labels with high confidence to such examples. In this article, I want to discuss an adversarial way of computing high-confidence out-of-distribution examples, so-called distal adversarial examples, and how confidence-calibrated adversarial training handles them.
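The idea can be sketched as follows: starting from random noise (which is clearly out-of-distribution), use projected gradient ascent to perturb the input so that the classifier's maximum softmax confidence is maximized. This is a minimal illustrative sketch, not the article's exact implementation; the `model` argument, the L-infinity ball radius `epsilon`, and the step schedule are all assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def distal_adversarial_example(model, shape=(1, 3, 32, 32), epsilon=0.3,
                               steps=40, step_size=0.01):
    """Sketch: start from uniform random noise and maximize the model's
    maximum softmax confidence via projected gradient ascent (PGD-style)."""
    model.eval()
    noise = torch.rand(shape)  # out-of-distribution starting point
    delta = torch.zeros_like(noise, requires_grad=True)
    for _ in range(steps):
        logits = model(torch.clamp(noise + delta, 0, 1))
        # Objective: the highest class log-probability; ascending this
        # pushes the network toward a high-confidence prediction.
        confidence = F.log_softmax(logits, dim=1).max(dim=1).values.sum()
        confidence.backward()
        with torch.no_grad():
            delta += step_size * delta.grad.sign()   # signed gradient step
            delta.clamp_(-epsilon, epsilon)          # project onto L-inf ball
            delta.grad.zero_()
    return torch.clamp(noise + delta, 0, 1).detach()
```

A classifier trained with standard cross-entropy will typically end up assigning high confidence to the resulting image, even though it contains no meaningful content; confidence-calibrated adversarial training is designed to instead drive the prediction toward a uniform (low-confidence) distribution on such inputs.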
The post Distal Adversarial Examples Against Neural Networks in PyTorch appeared first on David Stutz.