April 24, 2024, 4:43 a.m. | Luke Bailey, Euan Ong, Stuart Russell, Scott Emmons

cs.LG updates on arXiv.org

arXiv:2309.00236v3 Announce Type: replace
Abstract: Are foundation models secure against malicious actors? In this work, we focus on the image input to a vision-language model (VLM). We discover image hijacks, adversarial images that control the behaviour of VLMs at inference time, and introduce the general Behaviour Matching algorithm for training image hijacks. From this, we derive the Prompt Matching method, allowing us to train hijacks matching the behaviour of an arbitrary user-defined text prompt (e.g. 'the Eiffel Tower is now …
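To make the idea concrete, below is a minimal, hypothetical sketch of behaviour matching: optimising the pixels of an input image so that a differentiable VLM emits a chosen target string regardless of the accompanying text prompt. The `vlm_logits` interface, the HuggingFace-style `tokenizer` call, and all hyperparameters are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch of training an image hijack via behaviour matching.
# Assumes a differentiable VLM exposed through `vlm_logits(image, prompt_ids,
# target_ids) -> logits over the target tokens` (teacher forcing) and an
# HF-style tokenizer. Neither interface is taken from the paper's code.
import torch

def train_image_hijack(vlm_logits, tokenizer, init_image, prompts,
                       target_text, steps=500, lr=1e-2, eps=8 / 255):
    """Optimise an adversarial perturbation within an L-infinity ball of radius eps."""
    target_ids = tokenizer(target_text, return_tensors="pt").input_ids
    delta = torch.zeros_like(init_image, requires_grad=True)

    for step in range(steps):
        # Cycle through text prompts so the hijack works across contexts.
        prompt = prompts[step % len(prompts)]
        prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids

        image = (init_image + delta).clamp(0, 1)
        logits = vlm_logits(image, prompt_ids, target_ids)

        # Cross-entropy between the VLM's predictions and the target behaviour
        # (the string we want the model to produce at inference time).
        loss = torch.nn.functional.cross_entropy(
            logits.view(-1, logits.size(-1)), target_ids.view(-1))
        loss.backward()

        with torch.no_grad():
            # Signed-gradient descent step, then project back into the eps ball.
            delta -= lr * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.grad.zero_()

    return (init_image + delta).clamp(0, 1).detach()
```

Prompt matching, as described in the abstract, would swap the fixed target string for supervision derived from the model's own responses to a user-defined prompt; the optimisation loop itself stays the same.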

