Feb. 7, 2024, 5:48 a.m. | Hao Wang, Xin Zhang, Jinzhe Jiang, Yaqian Zhao, Chen Li

cs.CL updates on arXiv.org

As Large Language Models achieve breakthroughs in natural language processing (NLP) tasks, multimodal techniques have become extremely popular. However, multimodal NLP models have been shown to be vulnerable to adversarial attacks, where a small perturbation to the input can dramatically change a model's output. While several defense techniques have been proposed for both computer vision and NLP models, the multimodal robustness of models has not been fully explored. In this paper, we study the adversarial robustness provided …
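To illustrate the kind of attack the abstract describes, here is a minimal sketch of a one-step gradient-sign (FGSM-style) perturbation in PyTorch. This is a generic illustration of input perturbation, not the attack or defense studied in the paper; `model`, `images`, and `labels` are hypothetical placeholders for any differentiable classifier and a batch of inputs.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=0.03):
    """One-step gradient-sign attack: move each input a small step
    (epsilon, in L-infinity norm) in the direction that increases
    the classification loss."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # A tiny sign-based perturbation is often enough to flip predictions.
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range
```

Comparing `model(images).argmax(1)` against the predictions on `fgsm_perturb(model, images, labels)` typically shows a sharp accuracy drop even though the perturbed inputs look unchanged, which is the vulnerability that multimodal defenses like the one studied here aim to address.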

