March 25, 2024, 4:42 a.m. | Yiwei Zhou, Xiaobo Xia, Zhiwei Lin, Bo Han, Tongliang Liu

cs.LG updates on arXiv.org

arXiv:2403.14774v1 Announce Type: cross
Abstract: The vulnerability of deep neural networks to imperceptible adversarial perturbations has attracted widespread attention. Inspired by the success of vision-language foundation models, previous efforts achieved zero-shot adversarial robustness by aligning adversarial visual features with text supervision. In practice, however, these methods remain unsatisfactory due to several issues, including heavy adaptation cost, suboptimal text supervision, and uncontrolled natural generalization capacity. In this paper, to address these issues, we propose a few-shot adversarial prompt framework where adapting …
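
The abstract only sketches the approach, but the general recipe it builds on is well established: craft adversarial images (e.g., with PGD) and train so that their visual features align with text features of the correct class. The sketch below illustrates that recipe with toy stand-ins; the encoder, the cosine-similarity loss, and all hyperparameters (eps, alpha, steps, the temperature of 100) are illustrative assumptions, not the paper's actual few-shot adversarial prompt method.

```python
# Minimal sketch of PGD attack generation plus adversarial image-text
# feature alignment. Toy encoders; not the paper's implementation.
import torch
import torch.nn.functional as F


def pgd_attack(image_encoder, images, text_feats, labels,
               eps=8 / 255, alpha=2 / 255, steps=10):
    """L_inf PGD: maximize the image-text classification loss."""
    adv = images.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        img_feats = F.normalize(image_encoder(adv), dim=-1)
        logits = img_feats @ text_feats.t()            # cosine-similarity logits
        loss = F.cross_entropy(logits, labels)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()       # gradient ascent step
        adv = images + (adv - images).clamp(-eps, eps) # project into eps-ball
        adv = adv.clamp(0.0, 1.0)                      # keep valid pixel range
    return adv.detach()


def alignment_loss(image_encoder, adv_images, text_feats, labels):
    """Pull adversarial image features toward the true class's text feature."""
    img_feats = F.normalize(image_encoder(adv_images), dim=-1)
    logits = 100.0 * img_feats @ text_feats.t()        # CLIP-style temperature
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    torch.manual_seed(0)
    # Toy stand-ins: a linear image encoder and random normalized text features.
    image_encoder = torch.nn.Sequential(torch.nn.Flatten(),
                                        torch.nn.Linear(3 * 32 * 32, 64))
    images = torch.rand(4, 3, 32, 32)
    labels = torch.randint(0, 10, (4,))
    text_feats = F.normalize(torch.randn(10, 64), dim=-1)  # one per class
    adv = pgd_attack(image_encoder, images, text_feats, labels)
    print("alignment loss on adversarial images:",
          alignment_loss(image_encoder, adv, text_feats, labels).item())
```

In a prompt-learning setting, only the learnable prompt parameters (not shown here) would be updated to minimize this alignment loss on a few-shot set of adversarial examples.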

