May 8, 2024, 4:46 a.m. | Georgios Pantazopoulos, Amit Parekh, Malvina Nikandrou, Alessandro Suglia


arXiv:2405.04403v1 Announce Type: new
Abstract: Augmenting Large Language Models (LLMs) with image-understanding capabilities has resulted in a boom of high-performing Vision-Language Models (VLMs). While aligning LLMs with human values has received widespread attention, the safety of VLMs has not been studied to the same degree. In this paper, we explore the impact of jailbreaking on three state-of-the-art VLMs, each using a distinct modeling approach. By comparing each VLM to its respective LLM backbone, we find that each VLM is …

