April 15, 2024, 4:43 a.m. | Matthias Freiberger, Peter Kun, Christian Igel, Anders Sundnes Løvlie, Sebastian Risi

cs.LG updates on arXiv.org arxiv.org

arXiv:2307.03798v2 Announce Type: replace-cross
Abstract: Models leveraging both visual and textual data such as Contrastive Language-Image Pre-training (CLIP), are the backbone of many recent advances in artificial intelligence. In this work, we show that despite their versatility, such models are vulnerable to what we refer to as fooling master images. Fooling master images are capable of maximizing the confidence score of a CLIP model for a significant number of widely varying prompts, while being either unrecognizable or unrelated to the …

