Sept. 5, 2023, 5:36 p.m. | David Stutz


Out-of-distribution examples are images that are clearly irrelevant to the task at hand. Unfortunately, deep neural networks frequently assign arbitrary labels with high confidence to such examples. In this article, I want to discuss an adversarial way of computing high-confidence out-of-distribution examples, so-called distal adversarial examples, and how confidence-calibrated adversarial training handles them.
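The core idea can be sketched in a few lines of PyTorch: starting from random noise, perform gradient ascent on the model's maximum softmax confidence so that the noise becomes a high-confidence input far from the data distribution. The small linear classifier and the hyperparameters (`steps`, `lr`) below are placeholders for illustration, not the setup from the article:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in for a trained classifier (10 classes, 3x32x32 inputs).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

def distal_adversarial_example(model, shape=(1, 3, 32, 32), steps=40, lr=0.1):
    """Maximize the model's confidence starting from uniform noise,
    yielding a high-confidence out-of-distribution (distal) example."""
    x = torch.rand(shape, requires_grad=True)  # noise in [0, 1]
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        probs = torch.softmax(model(x), dim=1)
        confidence = probs.max(dim=1).values.mean()
        (-confidence).backward()  # minimize -confidence = ascend confidence
        optimizer.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)  # stay in the valid image range
    return x.detach()

x_distal = distal_adversarial_example(model)
conf = torch.softmax(model(x_distal), dim=1).max().item()
```

A model trained with confidence-calibrated adversarial training should instead assign near-uniform probabilities to such inputs, so `conf` would stay close to 1/num_classes rather than approaching 1.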

The post Distal Adversarial Examples Against Neural Networks in PyTorch appeared first on David Stutz.

