Guided Diffusion Model for Adversarial Purification from Random Noise. (arXiv:2206.10875v1 [cs.LG])
In this paper, we propose a novel guided diffusion purification approach to
provide a strong defense against adversarial attacks. Our model achieves 89.62%
robust accuracy under PGD-L_inf attack (eps = 8/255) on the CIFAR-10 dataset.
We first explore the essential correlations between unguided diffusion models
and randomized smoothing, enabling us to apply these models to certified
robustness. Empirically, our models outperform randomized smoothing by 5% when
the certified L2 radius r is larger than 0.5.
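
The purification idea behind such defenses can be sketched in a few lines: diffuse the (possibly adversarial) input forward for a moderate number of steps so the perturbation is swamped by Gaussian noise, then run the reverse denoising process, optionally guiding each step back toward the input. The sketch below is a minimal illustration, not the authors' code; the names and hyperparameters (eps_model, t_star, guide_scale) are assumptions, and since the abstract does not specify the guidance term, the quadratic pull toward the input is only one plausible choice.

```python
# Minimal, illustrative sketch of diffusion-based adversarial purification
# with a simple guidance term (NOT the authors' released implementation).
# Assumptions: `eps_model` is a pretrained noise-prediction network for the
# data distribution (e.g. CIFAR-10); `t_star` and `guide_scale` are
# hyperparameters chosen here purely for illustration.
import torch


def ddpm_schedule(T: int = 1000, beta_1: float = 1e-4, beta_T: float = 2e-2):
    """Linear DDPM noise schedule: per-step betas, alphas, cumulative alpha_bars."""
    betas = torch.linspace(beta_1, beta_T, T)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    return betas, alphas, alpha_bars


@torch.no_grad()
def purify(x_adv, eps_model, t_star=100, guide_scale=0.0, T=1000):
    betas, alphas, alpha_bars = ddpm_schedule(T)

    # Forward process: diffuse the adversarial input for t_star steps so the
    # adversarial perturbation is drowned in Gaussian noise.
    a_bar = alpha_bars[t_star - 1]
    x_t = a_bar.sqrt() * x_adv + (1.0 - a_bar).sqrt() * torch.randn_like(x_adv)

    # Reverse process: denoise back to t = 0. When guide_scale > 0, each step
    # is nudged toward the (re-noised) input -- one possible guidance term,
    # not necessarily the one used in the paper.
    for t in range(t_star, 0, -1):
        i = t - 1  # 0-based index into the schedule
        t_batch = torch.full((x_t.shape[0],), t, dtype=torch.long, device=x_t.device)
        eps_hat = eps_model(x_t, t_batch)
        mean = (x_t - betas[i] / (1.0 - alpha_bars[i]).sqrt() * eps_hat) / alphas[i].sqrt()
        if guide_scale > 0.0:
            mean = mean - guide_scale * (x_t - alpha_bars[i].sqrt() * x_adv)
        if t > 1:
            x_t = mean + betas[i].sqrt() * torch.randn_like(x_t)
        else:
            x_t = mean
    return x_t  # purified image, to be fed to the downstream classifier


# Smoke test with a dummy denoiser that predicts zero noise; a real defense
# would plug in a pretrained diffusion model here.
if __name__ == "__main__":
    dummy_eps_model = lambda x, t: torch.zeros_like(x)
    x_adv = torch.rand(4, 3, 32, 32)   # stand-in for attacked CIFAR-10 images
    x_pure = purify(x_adv, dummy_eps_model, t_star=100, guide_scale=0.05)
    print(x_pure.shape)                # torch.Size([4, 3, 32, 32])
```

The purified output is then classified by an ordinary, non-robust classifier; the connection to randomized smoothing noted in the abstract comes from the fact that the forward noising step plays the same role as the Gaussian noise added in smoothing-based certification.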