Web: https://davidstutz.de/icml-2022-art-of-robustness-paper-on-fragile-features-and-batch-normalization-in-adversarial-training/

Aug. 5, 2022, 2:38 p.m. | David Stutz

Blog Archives • David Stutz davidstutz.de

While batch normalization has long been argued to increase adversarial vulnerability, it is still used in state-of-the-art adversarially trained models, likely because it eases training and increases expressiveness. At the same time, recent papers argue that adversarial examples are partly caused by fragile features that arise from learning spurious correlations. In this paper, we study the impact of batch normalization on utilizing these fragile features for robustness by fine-tuning only the batch normalization layers.
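A minimal sketch of what "fine-tuning only the batch normalization layers" looks like in practice, assuming a PyTorch model (the model and function names below are illustrative, not the paper's actual code):

```python
# Hypothetical sketch: freeze all parameters except batch normalization,
# assuming PyTorch. Model and names are illustrative, not the paper's code.
import torch.nn as nn

def freeze_all_but_bn(model: nn.Module) -> None:
    """Freeze every parameter except the affine parameters of BN layers."""
    for p in model.parameters():
        p.requires_grad = False
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            for p in m.parameters():
                p.requires_grad = True

# Toy model: a small conv block with batch normalization.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.BatchNorm2d(16),
    nn.ReLU(),
)
freeze_all_but_bn(model)

# Only the BN weight and bias remain trainable.
trainable = sorted(n for n, p in model.named_parameters() if p.requires_grad)
```

An optimizer would then be given only the trainable subset, e.g. `torch.optim.SGD((p for p in model.parameters() if p.requires_grad), lr=0.01)`, so adversarial fine-tuning updates nothing but the batch normalization layers.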


