March 6, 2024, 5:45 a.m. | Xijia Tao, Shuai Zhong, Lei Li, Qi Liu, Lingpeng Kong

cs.CV updates on arXiv.org arxiv.org

arXiv:2403.02910v1 Announce Type: new
Abstract: There has been an increasing interest in the alignment of large language models (LLMs) with human values. However, the safety issues of their integration with a vision module, or vision language models (VLMs), remain relatively underexplored. In this paper, we propose a novel jailbreaking attack against VLMs, aiming to bypass their safety barrier when a user inputs harmful instructions. A scenario where our poisoned (image, text) data pairs are included in the training data is …

abstract alignment arxiv cs.ai cs.cv human image integration jailbreaking language language models large language large language models llms novel paper safety type values vision vision-language models vlms

Software Engineer for AI Training Data (School Specific)

@ G2i Inc | Remote

Software Engineer for AI Training Data (Python)

@ G2i Inc | Remote

Software Engineer for AI Training Data (Tier 2)

@ G2i Inc | Remote

Data Engineer

@ Lemon.io | Remote: Europe, LATAM, Canada, UK, Asia, Oceania

Artificial Intelligence – Bioinformatic Expert

@ University of Texas Medical Branch | Galveston, TX

Lead Developer (AI)

@ Cere Network | San Francisco, US