Understanding Zero-Shot Adversarial Robustness for Large-Scale Models. (arXiv:2212.07016v2 [cs.CV] UPDATED)
cs.CV updates on arXiv.org
Pretrained large-scale vision-language models like CLIP have exhibited strong
generalization over unseen tasks. Yet imperceptible adversarial perturbations
can significantly reduce CLIP's performance on new tasks. In this work, we
identify and explore the problem of \emph{adapting large-scale models for
zero-shot adversarial robustness}. We first identify two key factors during
model adaptation -- training losses and adaptation methods -- that affect the
model's zero-shot adversarial robustness. We then propose a text-guided
contrastive adversarial training loss, which aligns the text embeddings and …
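The general recipe behind such a text-guided contrastive adversarial loss -- craft an adversarial image that maximizes the image-text contrastive loss, then train the model to minimize it -- can be sketched with a toy linear "encoder" standing in for CLIP. Everything here (the linear encoder, the random text embeddings, the PGD hyperparameters) is an illustrative assumption, not the paper's actual implementation:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over logits."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Toy stand-in for CLIP: a linear image encoder and fixed per-class
# text embeddings (illustrative assumption, not the real model).
rng = np.random.default_rng(0)
d_in, d_emb, n_cls = 8, 4, 3
W = rng.normal(size=(d_emb, d_in))   # "image encoder" weights
T = rng.normal(size=(n_cls, d_emb))  # one "text embedding" per class

def loss_and_grad_x(x, y):
    """Contrastive (softmax cross-entropy) loss between the image
    embedding and the class text embeddings, plus its gradient
    w.r.t. the input image x (analytic, since the toy encoder is linear)."""
    logits = T @ (W @ x)
    p = softmax(logits)
    loss = -np.log(p[y] + 1e-12)
    grad_x = W.T @ (T.T @ (p - np.eye(n_cls)[y]))
    return loss, grad_x

def pgd_attack(x, y, eps=0.3, alpha=0.1, steps=5):
    """L_inf-bounded PGD: ascend the contrastive loss to craft an
    imperceptible (norm-bounded) adversarial input."""
    x_adv = x.copy()
    for _ in range(steps):
        _, g = loss_and_grad_x(x_adv, y)
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)
    return x_adv

x, y = rng.normal(size=d_in), 1
clean_loss, _ = loss_and_grad_x(x, y)
adv_loss, _ = loss_and_grad_x(pgd_attack(x, y), y)
# Adversarial training would now update W to minimize adv_loss;
# the attack should have increased the loss relative to the clean input.
print(clean_loss, adv_loss)
```

In the actual setting the encoder is CLIP's image tower and the text embeddings come from prompt templates, but the inner-maximization / outer-minimization structure is the same.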