Language-Driven Anchors for Zero-Shot Adversarial Robustness
March 12, 2024, 4:44 a.m. | Xiao Li, Wei Zhang, Yining Liu, Zhanhao Hu, Bo Zhang, Xiaolin Hu
cs.LG updates on arXiv.org arxiv.org
Abstract: Deep Neural Networks (DNNs) are known to be susceptible to adversarial attacks. Previous research has mainly focused on improving adversarial robustness in the fully supervised setting, leaving the challenging domain of zero-shot adversarial robustness an open question. In this work, we investigate this domain by leveraging recent advances in large vision-language models, such as CLIP, to introduce zero-shot adversarial robustness to DNNs. We propose LAAT, a Language-driven, Anchor-based Adversarial Training strategy. LAAT utilizes the features …
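The abstract describes using text features from a vision-language model as fixed class "anchors" for zero-shot prediction. A minimal sketch of the anchor-based classification idea, using random unit vectors as hypothetical stand-ins for frozen CLIP text embeddings (in LAAT the image encoder would additionally be adversarially trained so that perturbed images still map near the correct anchor):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for text-encoder embeddings of class names;
# in the paper's setting these would come from a frozen CLIP text encoder.
num_classes, dim = 3, 8
anchors = rng.normal(size=(num_classes, dim))
anchors /= np.linalg.norm(anchors, axis=1, keepdims=True)  # unit-norm anchors

def classify(image_feature: np.ndarray) -> int:
    """Zero-shot prediction: pick the nearest text anchor by cosine similarity."""
    f = image_feature / np.linalg.norm(image_feature)
    return int(np.argmax(anchors @ f))

# A feature lying exactly on anchor 1 is classified as class 1.
print(classify(anchors[1]))  # -> 1
```

Because the anchors are fixed by the class-name text rather than learned per dataset, the same classifier extends to unseen classes at test time, which is what makes the zero-shot robustness setting possible.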